00:00:00.049 Started by upstream project "autotest-per-patch" build number 120534 00:00:00.049 originally caused by: 00:00:00.049 Started by user sys_sgci 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.942 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.954 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.965 Checking out Revision 34845be7ae448993c10fd8929d8277dc075ec12e (FETCH_HEAD) 00:00:03.965 > git config core.sparsecheckout # timeout=10 00:00:03.978 > git read-tree -mu HEAD # timeout=10 00:00:03.994 > git checkout -f 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=5 00:00:04.013 Commit message: "ansible/roles/custom_facts: Escape instances of "\"" 00:00:04.013 > git rev-list --no-walk 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=10 00:00:04.096 [Pipeline] Start of Pipeline 00:00:04.108 [Pipeline] library 00:00:04.110 Loading library shm_lib@master 00:00:04.110 Library shm_lib@master is cached. Copying from home. 00:00:04.124 [Pipeline] node 00:00:19.182 Still waiting to schedule task 00:00:19.182 Waiting for next available executor on ‘vagrant-vm-host’ 00:25:03.328 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest 00:25:03.330 [Pipeline] { 00:25:03.343 [Pipeline] catchError 00:25:03.344 [Pipeline] { 00:25:03.362 [Pipeline] wrap 00:25:03.373 [Pipeline] { 00:25:03.383 [Pipeline] stage 00:25:03.385 [Pipeline] { (Prologue) 00:25:03.408 [Pipeline] echo 00:25:03.409 Node: VM-host-SM4 00:25:03.416 [Pipeline] cleanWs 00:25:03.425 [WS-CLEANUP] Deleting project workspace... 00:25:03.425 [WS-CLEANUP] Deferred wipeout is used... 
00:25:03.431 [WS-CLEANUP] done 00:25:03.595 [Pipeline] setCustomBuildProperty 00:25:03.669 [Pipeline] nodesByLabel 00:25:03.671 Found a total of 1 nodes with the 'sorcerer' label 00:25:03.683 [Pipeline] httpRequest 00:25:03.688 HttpMethod: GET 00:25:03.688 URL: http://10.211.164.101/packages/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:25:03.689 Sending request to url: http://10.211.164.101/packages/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:25:03.691 Response Code: HTTP/1.1 200 OK 00:25:03.691 Success: Status code 200 is in the accepted range: 200,404 00:25:03.692 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:25:04.734 [Pipeline] sh 00:25:05.016 + tar --no-same-owner -xf jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:25:05.036 [Pipeline] httpRequest 00:25:05.040 HttpMethod: GET 00:25:05.041 URL: http://10.211.164.101/packages/spdk_ca13e8d81299aa0c70c2c78edce8198e076e45d5.tar.gz 00:25:05.042 Sending request to url: http://10.211.164.101/packages/spdk_ca13e8d81299aa0c70c2c78edce8198e076e45d5.tar.gz 00:25:05.054 Response Code: HTTP/1.1 200 OK 00:25:05.055 Success: Status code 200 is in the accepted range: 200,404 00:25:05.055 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_ca13e8d81299aa0c70c2c78edce8198e076e45d5.tar.gz 00:25:24.261 [Pipeline] sh 00:25:24.538 + tar --no-same-owner -xf spdk_ca13e8d81299aa0c70c2c78edce8198e076e45d5.tar.gz 00:25:27.859 [Pipeline] sh 00:25:28.141 + git -C spdk log --oneline -n5 00:25:28.141 ca13e8d81 nvmf: allow commands depending on qpair state 00:25:28.141 8ab287f46 nvmf: add QPAIR_CONNECTING state 00:25:28.141 37fc1bc2b nvmf: send connect response from qpair's thread 00:25:28.141 082dabc39 nvmf: rename QPAIR_ACTIVE -> QPAIR_ENABLED 00:25:28.141 b36283f3a nvmf: add spdk_nvmf_qpair_is_active() 00:25:28.159 [Pipeline] writeFile 00:25:28.175 [Pipeline] sh 00:25:28.456 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:25:28.467 [Pipeline] sh 00:25:28.749 + cat autorun-spdk.conf 00:25:28.749 SPDK_RUN_FUNCTIONAL_TEST=1 00:25:28.749 SPDK_TEST_NVME=1 00:25:28.749 SPDK_TEST_FTL=1 00:25:28.749 SPDK_TEST_ISAL=1 00:25:28.749 SPDK_RUN_ASAN=1 00:25:28.749 SPDK_RUN_UBSAN=1 00:25:28.749 SPDK_TEST_XNVME=1 00:25:28.749 SPDK_TEST_NVME_FDP=1 00:25:28.749 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:28.755 RUN_NIGHTLY=0 00:25:28.757 [Pipeline] } 00:25:28.775 [Pipeline] // stage 00:25:28.790 [Pipeline] stage 00:25:28.793 [Pipeline] { (Run VM) 00:25:28.808 [Pipeline] sh 00:25:29.089 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:25:29.089 + echo 'Start stage prepare_nvme.sh' 00:25:29.089 Start stage prepare_nvme.sh 00:25:29.089 + [[ -n 2 ]] 00:25:29.089 + disk_prefix=ex2 00:25:29.089 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:25:29.089 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:25:29.089 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:25:29.089 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:25:29.089 ++ SPDK_TEST_NVME=1 00:25:29.089 ++ SPDK_TEST_FTL=1 00:25:29.089 ++ SPDK_TEST_ISAL=1 00:25:29.089 ++ SPDK_RUN_ASAN=1 00:25:29.089 ++ SPDK_RUN_UBSAN=1 00:25:29.089 ++ SPDK_TEST_XNVME=1 00:25:29.089 ++ SPDK_TEST_NVME_FDP=1 00:25:29.089 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:29.089 ++ RUN_NIGHTLY=0 00:25:29.089 + cd /var/jenkins/workspace/nvme-vg-autotest 00:25:29.089 + nvme_files=() 00:25:29.089 + declare -A nvme_files 00:25:29.089 + backend_dir=/var/lib/libvirt/images/backends 
00:25:29.089 + nvme_files['nvme.img']=5G 00:25:29.089 + nvme_files['nvme-cmb.img']=5G 00:25:29.089 + nvme_files['nvme-multi0.img']=4G 00:25:29.089 + nvme_files['nvme-multi1.img']=4G 00:25:29.089 + nvme_files['nvme-multi2.img']=4G 00:25:29.089 + nvme_files['nvme-openstack.img']=8G 00:25:29.089 + nvme_files['nvme-zns.img']=5G 00:25:29.089 + (( SPDK_TEST_NVME_PMR == 1 )) 00:25:29.089 + (( SPDK_TEST_FTL == 1 )) 00:25:29.089 + nvme_files["nvme-ftl.img"]=6G 00:25:29.089 + (( SPDK_TEST_NVME_FDP == 1 )) 00:25:29.089 + nvme_files["nvme-fdp.img"]=1G 00:25:29.089 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:25:29.089 + for nvme in "${!nvme_files[@]}" 00:25:29.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:25:29.089 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:25:29.089 + for nvme in "${!nvme_files[@]}" 00:25:29.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:25:29.348 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:25:29.348 + for nvme in "${!nvme_files[@]}" 00:25:29.348 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:25:29.348 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:25:29.348 + for nvme in "${!nvme_files[@]}" 00:25:29.348 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:25:29.606 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:25:29.606 + for nvme in "${!nvme_files[@]}" 00:25:29.606 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:25:30.542 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:25:30.542 + for nvme in "${!nvme_files[@]}" 00:25:30.542 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:25:30.542 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:25:30.542 + for nvme in "${!nvme_files[@]}" 00:25:30.542 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:25:30.542 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:25:30.542 + for nvme in "${!nvme_files[@]}" 00:25:30.542 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:25:30.542 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:25:30.542 + for nvme in "${!nvme_files[@]}" 00:25:30.542 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:25:31.476 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:25:31.476 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:25:31.734 + echo 'End stage prepare_nvme.sh' 00:25:31.734 End stage prepare_nvme.sh 00:25:31.746 [Pipeline] sh 00:25:32.022 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:25:32.022 Setup: -n 10 
-s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:25:32.022 00:25:32.022 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:25:32.022 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:25:32.022 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:25:32.022 HELP=0 00:25:32.022 DRY_RUN=0 00:25:32.022 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:25:32.022 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:25:32.022 NVME_AUTO_CREATE=0 00:25:32.022 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:25:32.023 NVME_CMB=,,,, 00:25:32.023 NVME_PMR=,,,, 00:25:32.023 NVME_ZNS=,,,, 00:25:32.023 NVME_MS=true,,,, 00:25:32.023 NVME_FDP=,,,on, 00:25:32.023 SPDK_VAGRANT_DISTRO=fedora38 00:25:32.023 SPDK_VAGRANT_VMCPU=10 00:25:32.023 SPDK_VAGRANT_VMRAM=12288 00:25:32.023 SPDK_VAGRANT_PROVIDER=libvirt 00:25:32.023 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:25:32.023 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:25:32.023 SPDK_OPENSTACK_NETWORK=0 00:25:32.023 VAGRANT_PACKAGE_BOX=0 00:25:32.023 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:25:32.023 FORCE_DISTRO=true 00:25:32.023 VAGRANT_BOX_VERSION= 00:25:32.023 EXTRA_VAGRANTFILES= 00:25:32.023 NIC_MODEL=e1000 00:25:32.023 00:25:32.023 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:25:32.023 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:25:35.347 Bringing machine 'default' up with 'libvirt' provider... 00:25:35.914 ==> default: Creating image (snapshot of base box volume). 00:25:36.172 ==> default: Creating domain with the following settings... 
00:25:36.172 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713430477_f51e260fc09838ad3f47 00:25:36.172 ==> default: -- Domain type: kvm 00:25:36.172 ==> default: -- Cpus: 10 00:25:36.172 ==> default: -- Feature: acpi 00:25:36.172 ==> default: -- Feature: apic 00:25:36.172 ==> default: -- Feature: pae 00:25:36.172 ==> default: -- Memory: 12288M 00:25:36.172 ==> default: -- Memory Backing: hugepages: 00:25:36.172 ==> default: -- Management MAC: 00:25:36.172 ==> default: -- Loader: 00:25:36.172 ==> default: -- Nvram: 00:25:36.172 ==> default: -- Base box: spdk/fedora38 00:25:36.172 ==> default: -- Storage pool: default 00:25:36.172 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713430477_f51e260fc09838ad3f47.img (20G) 00:25:36.172 ==> default: -- Volume Cache: default 00:25:36.172 ==> default: -- Kernel: 00:25:36.172 ==> default: -- Initrd: 00:25:36.172 ==> default: -- Graphics Type: vnc 00:25:36.172 ==> default: -- Graphics Port: -1 00:25:36.172 ==> default: -- Graphics IP: 127.0.0.1 00:25:36.172 ==> default: -- Graphics Password: Not defined 00:25:36.172 ==> default: -- Video Type: cirrus 00:25:36.172 ==> default: -- Video VRAM: 9216 00:25:36.172 ==> default: -- Sound Type: 00:25:36.172 ==> default: -- Keymap: en-us 00:25:36.172 ==> default: -- TPM Path: 00:25:36.172 ==> default: -- INPUT: type=mouse, bus=ps2 00:25:36.172 ==> default: -- Command line args: 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:25:36.172 ==> default: -> value=-drive, 00:25:36.172 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:25:36.172 ==> default: -> value=-device, 00:25:36.172 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:36.430 ==> default: Creating shared folders metadata... 00:25:36.430 ==> default: Starting domain. 00:25:38.968 ==> default: Waiting for domain to get an IP address... 00:25:53.922 ==> default: Waiting for SSH to become available... 00:25:55.295 ==> default: Configuring and enabling network interfaces... 00:26:00.561 default: SSH address: 192.168.121.126:22 00:26:00.561 default: SSH username: vagrant 00:26:00.561 default: SSH auth method: private key 00:26:02.479 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:26:10.650 ==> default: Mounting SSHFS shared folder... 00:26:12.045 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:26:12.045 ==> default: Checking Mount.. 00:26:13.418 ==> default: Folder Successfully Mounted! 00:26:13.418 ==> default: Running provisioner: file... 00:26:14.353 default: ~/.gitconfig => .gitconfig 00:26:14.920 00:26:14.920 SUCCESS! 00:26:14.920 00:26:14.920 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:26:14.920 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:26:14.920 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:26:14.920 00:26:14.929 [Pipeline] } 00:26:14.947 [Pipeline] // stage 00:26:14.956 [Pipeline] dir 00:26:14.956 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:26:14.958 [Pipeline] { 00:26:14.972 [Pipeline] catchError 00:26:14.973 [Pipeline] { 00:26:14.987 [Pipeline] sh 00:26:15.266 + vagrant ssh-config --host vagrant 00:26:15.266 + sed -ne /^Host/,$p 00:26:15.266 + tee ssh_conf 00:26:18.560 Host vagrant 00:26:18.560 HostName 192.168.121.126 00:26:18.560 User vagrant 00:26:18.560 Port 22 00:26:18.560 UserKnownHostsFile /dev/null 00:26:18.560 StrictHostKeyChecking no 00:26:18.560 PasswordAuthentication no 00:26:18.560 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:26:18.560 IdentitiesOnly yes 00:26:18.560 LogLevel FATAL 00:26:18.560 ForwardAgent yes 00:26:18.560 ForwardX11 yes 00:26:18.560 00:26:18.580 [Pipeline] withEnv 00:26:18.582 [Pipeline] { 00:26:18.598 [Pipeline] sh 00:26:18.900 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:26:18.900 source /etc/os-release 00:26:18.900 [[ -e /image.version ]] && img=$(< /image.version) 00:26:18.900 # Minimal, systemd-like check. 
00:26:18.900 if [[ -e /.dockerenv ]]; then 00:26:18.900 # Clear garbage from the node's name: 00:26:18.900 # agt-er_autotest_547-896 -> autotest_547-896 00:26:18.900 # $HOSTNAME is the actual container id 00:26:18.900 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:26:18.900 if mountpoint -q /etc/hostname; then 00:26:18.900 # We can assume this is a mount from a host where container is running, 00:26:18.900 # so fetch its hostname to easily identify the target swarm worker. 00:26:18.900 container="$(< /etc/hostname) ($agent)" 00:26:18.900 else 00:26:18.900 # Fallback 00:26:18.900 container=$agent 00:26:18.900 fi 00:26:18.900 fi 00:26:18.900 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:26:18.900 00:26:18.923 [Pipeline] } 00:26:18.943 [Pipeline] // withEnv 00:26:18.952 [Pipeline] setCustomBuildProperty 00:26:18.967 [Pipeline] stage 00:26:18.969 [Pipeline] { (Tests) 00:26:18.989 [Pipeline] sh 00:26:19.298 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:26:19.324 [Pipeline] timeout 00:26:19.324 Timeout set to expire in 40 min 00:26:19.326 [Pipeline] { 00:26:19.343 [Pipeline] sh 00:26:19.716 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:26:20.288 HEAD is now at ca13e8d81 nvmf: allow commands depending on qpair state 00:26:20.300 [Pipeline] sh 00:26:20.580 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:26:20.851 [Pipeline] sh 00:26:21.131 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:26:21.405 [Pipeline] sh 00:26:21.684 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:26:21.943 ++ readlink -f spdk_repo 00:26:21.943 + DIR_ROOT=/home/vagrant/spdk_repo 00:26:21.943 + [[ -n /home/vagrant/spdk_repo ]] 00:26:21.943 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:26:21.943 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:26:21.943 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:26:21.943 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:26:21.943 + [[ -d /home/vagrant/spdk_repo/output ]] 00:26:21.943 + cd /home/vagrant/spdk_repo 00:26:21.943 + source /etc/os-release 00:26:21.943 ++ NAME='Fedora Linux' 00:26:21.943 ++ VERSION='38 (Cloud Edition)' 00:26:21.943 ++ ID=fedora 00:26:21.943 ++ VERSION_ID=38 00:26:21.943 ++ VERSION_CODENAME= 00:26:21.943 ++ PLATFORM_ID=platform:f38 00:26:21.943 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:26:21.943 ++ ANSI_COLOR='0;38;2;60;110;180' 00:26:21.943 ++ LOGO=fedora-logo-icon 00:26:21.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:26:21.943 ++ HOME_URL=https://fedoraproject.org/ 00:26:21.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:26:21.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:26:21.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:26:21.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:26:21.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:26:21.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:26:21.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:26:21.943 ++ SUPPORT_END=2024-05-14 00:26:21.943 ++ VARIANT='Cloud Edition' 00:26:21.943 ++ VARIANT_ID=cloud 00:26:21.943 + uname -a 00:26:21.943 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:26:21.943 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:26:22.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:22.769 Hugepages 00:26:22.769 node hugesize free / total 00:26:22.769 node0 1048576kB 0 / 0 00:26:22.769 node0 2048kB 0 / 0 00:26:22.769 00:26:22.769 Type BDF Vendor Device NUMA Driver Device Block devices 00:26:22.769 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:26:22.769 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:26:22.769 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:26:22.769 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:26:22.769 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:26:22.769 + rm -f /tmp/spdk-ld-path 00:26:22.769 + source autorun-spdk.conf 00:26:22.769 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:22.769 ++ SPDK_TEST_NVME=1 00:26:22.769 ++ SPDK_TEST_FTL=1 00:26:22.769 ++ SPDK_TEST_ISAL=1 00:26:22.769 ++ SPDK_RUN_ASAN=1 00:26:22.769 ++ SPDK_RUN_UBSAN=1 00:26:22.769 ++ SPDK_TEST_XNVME=1 00:26:22.769 ++ SPDK_TEST_NVME_FDP=1 00:26:22.769 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:22.769 ++ RUN_NIGHTLY=0 00:26:22.769 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:26:22.769 + [[ -n '' ]] 00:26:22.769 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:26:22.769 + for M in /var/spdk/build-*-manifest.txt 00:26:22.769 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:26:22.769 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:22.769 + for M in /var/spdk/build-*-manifest.txt 00:26:22.769 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:26:22.769 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:22.769 + for M in /var/spdk/build-*-manifest.txt 00:26:22.769 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:26:22.769 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:22.769 ++ uname 00:26:22.769 + [[ Linux == \L\i\n\u\x ]] 00:26:22.769 + sudo dmesg -T 00:26:23.028 + sudo dmesg --clear 00:26:23.028 + dmesg_pid=5057 00:26:23.028 + [[ Fedora Linux == FreeBSD ]] 
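(Editor's aside) The setup.sh status table above lines up one-to-one with the QEMU "-device nvme" arguments from the VM-creation stage earlier in this log. The mapping below is inferred from those two outputs (the job itself never prints it), with image names and sizes taken from the prepare_nvme.sh stage:
    addr=0x10 serial=12340 -> 0000:00:10.0 nvme0 (nvme0n1; ex2-nvme-ftl.img, 6G, ms=64)
    addr=0x11 serial=12341 -> 0000:00:11.0 nvme1 (nvme1n1; ex2-nvme.img, 5G)
    addr=0x12 serial=12342 -> 0000:00:12.0 nvme2 (nvme2n1..n3; ex2-nvme-multi0/1/2.img, 4G each)
    addr=0x13 serial=12343 -> 0000:00:13.0 nvme3 (nvme3n1; ex2-nvme-fdp.img, 1G, behind nvme-subsys fdp-subsys3 with fdp=on)
A hypothetical way to confirm the mapping from inside the guest (not part of autorun; it relies only on the standard nvme sysfs attributes):
    # Print controller name, PCI BDF, and serial for every NVMe controller.
    for c in /sys/class/nvme/nvme*; do
        printf '%s pci=%s serial=%s\n' "$(basename "$c")" \
            "$(basename "$(readlink -f "$c/device")")" "$(cat "$c/serial")"
    done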
00:26:23.028 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.028 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.028 + sudo dmesg -Tw 00:26:23.028 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:26:23.028 + [[ -x /usr/src/fio-static/fio ]] 00:26:23.028 + export FIO_BIN=/usr/src/fio-static/fio 00:26:23.028 + FIO_BIN=/usr/src/fio-static/fio 00:26:23.028 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:26:23.028 + [[ ! -v VFIO_QEMU_BIN ]] 00:26:23.028 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:26:23.028 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:23.028 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:23.028 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:26:23.028 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:23.028 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:23.028 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:23.028 Test configuration: 00:26:23.028 SPDK_RUN_FUNCTIONAL_TEST=1 00:26:23.028 SPDK_TEST_NVME=1 00:26:23.028 SPDK_TEST_FTL=1 00:26:23.028 SPDK_TEST_ISAL=1 00:26:23.028 SPDK_RUN_ASAN=1 00:26:23.028 SPDK_RUN_UBSAN=1 00:26:23.028 SPDK_TEST_XNVME=1 00:26:23.028 SPDK_TEST_NVME_FDP=1 00:26:23.028 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:23.028 RUN_NIGHTLY=0 08:55:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.028 08:55:25 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:23.028 08:55:25 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.028 08:55:25 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.028 08:55:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.028 08:55:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.028 08:55:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.028 08:55:25 -- paths/export.sh@5 -- $ export PATH 00:26:23.028 08:55:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.028 08:55:25 -- common/autobuild_common.sh@434 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:26:23.028 08:55:25 -- common/autobuild_common.sh@435 -- $ date +%s 00:26:23.028 08:55:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713430525.XXXXXX 00:26:23.028 08:55:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713430525.wYYYrH 00:26:23.028 08:55:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:26:23.028 08:55:25 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:26:23.028 08:55:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:23.028 08:55:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:23.028 08:55:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:23.028 08:55:25 -- common/autobuild_common.sh@451 -- $ get_config_params 00:26:23.028 08:55:25 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:26:23.028 08:55:25 -- common/autotest_common.sh@10 -- $ set +x 00:26:23.028 08:55:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:26:23.028 08:55:25 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:26:23.028 08:55:25 -- pm/common@17 -- $ local monitor 00:26:23.028 08:55:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:23.028 08:55:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5093 00:26:23.028 08:55:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:23.028 08:55:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5095 00:26:23.028 08:55:25 -- pm/common@26 -- $ sleep 1 00:26:23.028 08:55:25 -- pm/common@21 -- $ date +%s 00:26:23.028 08:55:25 -- pm/common@21 -- $ date +%s 00:26:23.028 08:55:25 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713430525 00:26:23.028 08:55:25 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713430525 00:26:23.287 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713430525_collect-vmstat.pm.log 00:26:23.287 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713430525_collect-cpu-load.pm.log 00:26:24.224 08:55:26 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:26:24.224 08:55:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:26:24.224 08:55:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:26:24.224 08:55:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:24.224 08:55:26 -- spdk/autobuild.sh@16 -- $ date -u 00:26:24.224 Thu Apr 18 08:55:26 AM UTC 2024 00:26:24.224 08:55:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:26:24.224 v24.05-pre-414-gca13e8d81 00:26:24.224 08:55:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:26:24.224 08:55:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:26:24.224 08:55:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:26:24.224 08:55:26 -- 
common/autotest_common.sh@1093 -- $ xtrace_disable 00:26:24.224 08:55:26 -- common/autotest_common.sh@10 -- $ set +x 00:26:24.224 ************************************ 00:26:24.224 START TEST asan 00:26:24.224 ************************************ 00:26:24.224 using asan 00:26:24.224 08:55:26 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:26:24.224 00:26:24.224 real 0m0.000s 00:26:24.224 user 0m0.000s 00:26:24.224 sys 0m0.000s 00:26:24.224 08:55:26 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:26:24.224 08:55:26 -- common/autotest_common.sh@10 -- $ set +x 00:26:24.224 ************************************ 00:26:24.224 END TEST asan 00:26:24.224 ************************************ 00:26:24.224 08:55:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:26:24.224 08:55:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:26:24.224 08:55:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:26:24.224 08:55:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:26:24.224 08:55:26 -- common/autotest_common.sh@10 -- $ set +x 00:26:24.224 ************************************ 00:26:24.224 START TEST ubsan 00:26:24.224 ************************************ 00:26:24.224 using ubsan 00:26:24.224 08:55:26 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:26:24.224 00:26:24.224 real 0m0.000s 00:26:24.224 user 0m0.000s 00:26:24.224 sys 0m0.000s 00:26:24.224 08:55:26 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:26:24.224 ************************************ 00:26:24.224 END TEST ubsan 00:26:24.224 ************************************ 00:26:24.224 08:55:26 -- common/autotest_common.sh@10 -- $ set +x 00:26:24.483 08:55:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:26:24.483 08:55:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:26:24.483 08:55:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:26:24.483 08:55:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:26:24.483 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:24.483 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:25.050 Using 'verbs' RDMA provider 00:26:40.873 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:26:55.768 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:26:55.768 Creating mk/config.mk...done. 00:26:55.768 Creating mk/cc.flags.mk...done. 00:26:55.768 Type 'make' to build. 
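(Editor's aside) The configure invocation traced above can be replayed outside the CI harness. A minimal sketch, assuming a local SPDK checkout at ~/spdk_repo/spdk and that the optional dependencies sit where this builder has them (for example fio sources under /usr/src/fio); drop any --with-* flag whose dependency is missing locally:
    cd ~/spdk_repo/spdk
    # Same flags the job assembled in config_params, plus --with-shared as traced above.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage \
        --with-ublk --with-xnvme --with-shared
    make -j10   # the job passes -j10, matching SPDK_VAGRANT_VMCPU=10
As in the log, this leaves DPDK at its default in-tree build under dpdk/build and picks the 'verbs' RDMA provider when one is available.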
00:26:55.768 08:55:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:26:55.768 08:55:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:26:55.768 08:55:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:26:55.768 08:55:56 -- common/autotest_common.sh@10 -- $ set +x 00:26:55.768 ************************************ 00:26:55.768 START TEST make 00:26:55.768 ************************************ 00:26:55.768 08:55:56 -- common/autotest_common.sh@1111 -- $ make -j10 00:26:55.768 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:26:55.768 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:26:55.768 meson setup builddir \ 00:26:55.768 -Dwith-libaio=enabled \ 00:26:55.768 -Dwith-liburing=enabled \ 00:26:55.768 -Dwith-libvfn=disabled \ 00:26:55.768 -Dwith-spdk=false && \ 00:26:55.768 meson compile -C builddir && \ 00:26:55.768 cd -) 00:26:55.768 make[1]: Nothing to be done for 'all'. 00:26:58.298 The Meson build system 00:26:58.298 Version: 1.3.0 00:26:58.298 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:26:58.298 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:26:58.298 Build type: native build 00:26:58.298 Project name: xnvme 00:26:58.298 Project version: 0.7.3 00:26:58.298 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:26:58.298 C linker for the host machine: cc ld.bfd 2.39-16 00:26:58.298 Host machine cpu family: x86_64 00:26:58.298 Host machine cpu: x86_64 00:26:58.298 Message: host_machine.system: linux 00:26:58.298 Compiler for C supports arguments -Wno-missing-braces: YES 00:26:58.298 Compiler for C supports arguments -Wno-cast-function-type: YES 00:26:58.298 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:26:58.298 Run-time dependency threads found: YES 00:26:58.298 Has header "setupapi.h" : NO 00:26:58.298 Has header "linux/blkzoned.h" : YES 00:26:58.298 Has header "linux/blkzoned.h" : YES (cached) 00:26:58.298 Has header "libaio.h" : YES 00:26:58.298 Library aio found: YES 00:26:58.298 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:26:58.298 Run-time dependency liburing found: YES 2.2 00:26:58.298 Dependency libvfn skipped: feature with-libvfn disabled 00:26:58.298 Run-time dependency appleframeworks found: NO (tried framework) 00:26:58.298 Run-time dependency appleframeworks found: NO (tried framework) 00:26:58.298 Configuring xnvme_config.h using configuration 00:26:58.298 Configuring xnvme.spec using configuration 00:26:58.298 Run-time dependency bash-completion found: YES 2.11 00:26:58.298 Message: Bash-completions: /usr/share/bash-completion/completions 00:26:58.298 Program cp found: YES (/usr/bin/cp) 00:26:58.298 Has header "winsock2.h" : NO 00:26:58.298 Has header "dbghelp.h" : NO 00:26:58.298 Library rpcrt4 found: NO 00:26:58.298 Library rt found: YES 00:26:58.298 Checking for function "clock_gettime" with dependency -lrt: YES 00:26:58.299 Found CMake: /usr/bin/cmake (3.27.7) 00:26:58.299 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:26:58.299 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:26:58.299 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:26:58.299 Build targets in project: 32 00:26:58.299 00:26:58.299 xnvme 0.7.3 00:26:58.299 00:26:58.299 User defined options 00:26:58.299 with-libaio : enabled 00:26:58.299 with-liburing: enabled 00:26:58.299 with-libvfn : disabled 00:26:58.299 with-spdk : false 00:26:58.299 00:26:58.299 Found ninja-1.11.1.git.kitware.jobserver-1 at 
/usr/local/bin/ninja 00:26:58.557 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:26:58.557 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:26:58.557 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:26:58.557 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:26:58.557 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:26:58.557 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:26:58.557 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:26:58.557 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:26:58.814 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:26:58.814 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:26:58.814 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:26:58.814 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:26:58.814 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:26:58.814 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:26:58.815 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:26:58.815 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:26:58.815 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:26:58.815 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:26:58.815 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:26:58.815 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:26:58.815 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:26:58.815 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:26:58.815 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:26:58.815 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:26:59.072 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:26:59.072 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:26:59.072 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:26:59.072 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:26:59.072 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:26:59.072 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:26:59.072 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:26:59.072 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:26:59.072 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:26:59.073 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:26:59.073 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:26:59.073 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:26:59.073 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:26:59.073 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:26:59.073 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:26:59.073 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:26:59.073 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:26:59.073 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:26:59.073 [42/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:26:59.073 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:26:59.073 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:26:59.073 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:26:59.073 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:26:59.073 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:26:59.073 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:26:59.073 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:26:59.073 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:26:59.073 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:26:59.073 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:26:59.073 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:26:59.073 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:26:59.330 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:26:59.330 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:26:59.330 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:26:59.330 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:26:59.330 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:26:59.330 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:26:59.330 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:26:59.330 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:26:59.330 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:26:59.330 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:26:59.330 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:26:59.330 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:26:59.330 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:26:59.330 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:26:59.330 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:26:59.330 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:26:59.331 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:26:59.589 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:26:59.589 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:26:59.589 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:26:59.589 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:26:59.589 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:26:59.589 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:26:59.589 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:26:59.589 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:26:59.589 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:26:59.589 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:26:59.589 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:26:59.589 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:26:59.589 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:26:59.589 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:26:59.589 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:26:59.589 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:26:59.589 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:26:59.589 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:26:59.848 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:26:59.848 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:26:59.848 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:26:59.848 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:26:59.848 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:26:59.848 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:26:59.848 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:26:59.848 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:26:59.848 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:26:59.848 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:26:59.848 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:26:59.848 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:26:59.849 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:26:59.849 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:26:59.849 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:26:59.849 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:26:59.849 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:26:59.849 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:26:59.849 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:26:59.849 [109/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:26:59.849 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:26:59.849 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:26:59.849 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:26:59.849 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:26:59.849 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:27:00.107 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:27:00.107 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:27:00.107 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:27:00.107 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:27:00.107 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:27:00.107 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:27:00.107 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:27:00.107 [122/203] Linking target lib/libxnvme.so 00:27:00.107 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:27:00.107 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:27:00.107 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:27:00.107 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:27:00.107 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:27:00.107 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:27:00.107 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 
00:27:00.107 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:27:00.107 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:27:00.107 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:27:00.107 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:27:00.108 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:27:00.108 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:27:00.108 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:27:00.108 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:27:00.367 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:27:00.367 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:27:00.367 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:27:00.367 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:27:00.367 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:27:00.367 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:27:00.367 [144/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:27:00.367 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:27:00.367 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:27:00.367 [147/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:27:00.367 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:27:00.626 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:27:00.626 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:27:00.626 [151/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:27:00.626 [152/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:27:00.626 [153/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:27:00.626 [154/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:27:00.626 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:27:00.626 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:27:00.626 [157/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:27:00.626 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:27:00.626 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:27:00.626 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:27:00.626 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:27:00.626 [162/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:27:00.626 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:27:00.920 [164/203] Compiling C object tools/zoned.p/zoned.c.o 00:27:00.920 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:27:00.920 [166/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:27:00.920 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:27:00.920 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:27:00.920 [169/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:27:00.920 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:27:00.920 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:27:00.920 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:27:00.920 [173/203] Linking static target lib/libxnvme.a 00:27:00.920 [174/203] Linking target tests/xnvme_tests_cli 00:27:00.920 [175/203] Linking target tests/xnvme_tests_async_intf 00:27:00.920 
[176/203] Linking target tests/xnvme_tests_buf 00:27:00.920 [177/203] Linking target tests/xnvme_tests_enum 00:27:00.920 [178/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:27:00.920 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:27:01.177 [180/203] Linking target tests/xnvme_tests_scc 00:27:01.177 [181/203] Linking target tests/xnvme_tests_lblk 00:27:01.177 [182/203] Linking target tests/xnvme_tests_ioworker 00:27:01.177 [183/203] Linking target tests/xnvme_tests_znd_append 00:27:01.177 [184/203] Linking target tests/xnvme_tests_znd_state 00:27:01.177 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:27:01.177 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:27:01.177 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:27:01.177 [188/203] Linking target tests/xnvme_tests_map 00:27:01.177 [189/203] Linking target tests/xnvme_tests_kvs 00:27:01.177 [190/203] Linking target tools/lblk 00:27:01.177 [191/203] Linking target tools/xdd 00:27:01.177 [192/203] Linking target tools/xnvme 00:27:01.177 [193/203] Linking target tools/xnvme_file 00:27:01.177 [194/203] Linking target examples/xnvme_dev 00:27:01.177 [195/203] Linking target tools/kvs 00:27:01.177 [196/203] Linking target examples/zoned_io_async 00:27:01.177 [197/203] Linking target tools/zoned 00:27:01.177 [198/203] Linking target examples/xnvme_enum 00:27:01.177 [199/203] Linking target examples/xnvme_io_async 00:27:01.177 [200/203] Linking target examples/xnvme_single_async 00:27:01.177 [201/203] Linking target examples/xnvme_single_sync 00:27:01.177 [202/203] Linking target examples/xnvme_hello 00:27:01.177 [203/203] Linking target examples/zoned_io_sync 00:27:01.177 INFO: autodetecting backend as ninja 00:27:01.177 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:27:01.177 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:27:11.145 The Meson build system 00:27:11.145 Version: 1.3.0 00:27:11.145 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:27:11.145 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:27:11.145 Build type: native build 00:27:11.145 Program cat found: YES (/usr/bin/cat) 00:27:11.145 Project name: DPDK 00:27:11.145 Project version: 23.11.0 00:27:11.145 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:27:11.145 C linker for the host machine: cc ld.bfd 2.39-16 00:27:11.145 Host machine cpu family: x86_64 00:27:11.145 Host machine cpu: x86_64 00:27:11.145 Message: ## Building in Developer Mode ## 00:27:11.145 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:11.145 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:27:11.145 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:27:11.145 Program python3 found: YES (/usr/bin/python3) 00:27:11.145 Program cat found: YES (/usr/bin/cat) 00:27:11.145 Compiler for C supports arguments -march=native: YES 00:27:11.145 Checking for size of "void *" : 8 00:27:11.145 Checking for size of "void *" : 8 (cached) 00:27:11.145 Library m found: YES 00:27:11.145 Library numa found: YES 00:27:11.145 Has header "numaif.h" : YES 00:27:11.145 Library fdt found: NO 00:27:11.145 Library execinfo found: NO 00:27:11.145 Has header "execinfo.h" : YES 00:27:11.145 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:27:11.145 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:11.145 Run-time dependency libbsd 
found: NO (tried pkgconfig)
00:27:11.145 Run-time dependency jansson found: NO (tried pkgconfig)
00:27:11.145 Run-time dependency openssl found: YES 3.0.9
00:27:11.145 Run-time dependency libpcap found: YES 1.10.4
00:27:11.145 Has header "pcap.h" with dependency libpcap: YES
00:27:11.145 Compiler for C supports arguments -Wcast-qual: YES
00:27:11.145 Compiler for C supports arguments -Wdeprecated: YES
00:27:11.145 Compiler for C supports arguments -Wformat: YES
00:27:11.145 Compiler for C supports arguments -Wformat-nonliteral: NO
00:27:11.145 Compiler for C supports arguments -Wformat-security: NO
00:27:11.145 Compiler for C supports arguments -Wmissing-declarations: YES
00:27:11.145 Compiler for C supports arguments -Wmissing-prototypes: YES
00:27:11.145 Compiler for C supports arguments -Wnested-externs: YES
00:27:11.145 Compiler for C supports arguments -Wold-style-definition: YES
00:27:11.145 Compiler for C supports arguments -Wpointer-arith: YES
00:27:11.145 Compiler for C supports arguments -Wsign-compare: YES
00:27:11.145 Compiler for C supports arguments -Wstrict-prototypes: YES
00:27:11.145 Compiler for C supports arguments -Wundef: YES
00:27:11.145 Compiler for C supports arguments -Wwrite-strings: YES
00:27:11.145 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:27:11.145 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:27:11.145 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:27:11.145 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:27:11.145 Program objdump found: YES (/usr/bin/objdump)
00:27:11.145 Compiler for C supports arguments -mavx512f: YES
00:27:11.145 Checking if "AVX512 checking" compiles: YES
00:27:11.145 Fetching value of define "__SSE4_2__" : 1
00:27:11.145 Fetching value of define "__AES__" : 1
00:27:11.145 Fetching value of define "__AVX__" : 1
00:27:11.145 Fetching value of define "__AVX2__" : 1
00:27:11.145 Fetching value of define "__AVX512BW__" : 1
00:27:11.145 Fetching value of define "__AVX512CD__" : 1
00:27:11.145 Fetching value of define "__AVX512DQ__" : 1
00:27:11.145 Fetching value of define "__AVX512F__" : 1
00:27:11.145 Fetching value of define "__AVX512VL__" : 1
00:27:11.145 Fetching value of define "__PCLMUL__" : 1
00:27:11.145 Fetching value of define "__RDRND__" : 1
00:27:11.145 Fetching value of define "__RDSEED__" : 1
00:27:11.145 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:27:11.145 Fetching value of define "__znver1__" : (undefined)
00:27:11.145 Fetching value of define "__znver2__" : (undefined)
00:27:11.145 Fetching value of define "__znver3__" : (undefined)
00:27:11.145 Fetching value of define "__znver4__" : (undefined)
00:27:11.145 Library asan found: YES
00:27:11.145 Compiler for C supports arguments -Wno-format-truncation: YES
00:27:11.145 Message: lib/log: Defining dependency "log"
00:27:11.145 Message: lib/kvargs: Defining dependency "kvargs"
00:27:11.145 Message: lib/telemetry: Defining dependency "telemetry"
00:27:11.145 Library rt found: YES
00:27:11.145 Checking for function "getentropy" : NO
00:27:11.145 Message: lib/eal: Defining dependency "eal"
00:27:11.145 Message: lib/ring: Defining dependency "ring"
00:27:11.145 Message: lib/rcu: Defining dependency "rcu"
00:27:11.145 Message: lib/mempool: Defining dependency "mempool"
00:27:11.145 Message: lib/mbuf: Defining dependency "mbuf"
00:27:11.145 Fetching value of define "__PCLMUL__" : 1 (cached)
00:27:11.145 Fetching value of define "__AVX512F__" : 1 (cached)
00:27:11.145 Fetching value of define "__AVX512BW__" : 1 (cached)
00:27:11.145 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:27:11.145 Fetching value of define "__AVX512VL__" : 1 (cached)
00:27:11.145 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:27:11.145 Compiler for C supports arguments -mpclmul: YES
00:27:11.145 Compiler for C supports arguments -maes: YES
00:27:11.145 Compiler for C supports arguments -mavx512f: YES (cached)
00:27:11.145 Compiler for C supports arguments -mavx512bw: YES
00:27:11.145 Compiler for C supports arguments -mavx512dq: YES
00:27:11.145 Compiler for C supports arguments -mavx512vl: YES
00:27:11.145 Compiler for C supports arguments -mvpclmulqdq: YES
00:27:11.145 Compiler for C supports arguments -mavx2: YES
00:27:11.145 Compiler for C supports arguments -mavx: YES
00:27:11.145 Message: lib/net: Defining dependency "net"
00:27:11.145 Message: lib/meter: Defining dependency "meter"
00:27:11.145 Message: lib/ethdev: Defining dependency "ethdev"
00:27:11.145 Message: lib/pci: Defining dependency "pci"
00:27:11.145 Message: lib/cmdline: Defining dependency "cmdline"
00:27:11.145 Message: lib/hash: Defining dependency "hash"
00:27:11.145 Message: lib/timer: Defining dependency "timer"
00:27:11.145 Message: lib/compressdev: Defining dependency "compressdev"
00:27:11.145 Message: lib/cryptodev: Defining dependency "cryptodev"
00:27:11.145 Message: lib/dmadev: Defining dependency "dmadev"
00:27:11.145 Compiler for C supports arguments -Wno-cast-qual: YES
00:27:11.145 Message: lib/power: Defining dependency "power"
00:27:11.145 Message: lib/reorder: Defining dependency "reorder"
00:27:11.145 Message: lib/security: Defining dependency "security"
00:27:11.145 Has header "linux/userfaultfd.h" : YES
00:27:11.145 Has header "linux/vduse.h" : YES
00:27:11.145 Message: lib/vhost: Defining dependency "vhost"
00:27:11.145 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:27:11.145 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:27:11.145 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:27:11.145 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:27:11.146 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:27:11.146 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:27:11.146 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:27:11.146 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:27:11.146 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:27:11.146 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:27:11.146 Program doxygen found: YES (/usr/bin/doxygen)
00:27:11.146 Configuring doxy-api-html.conf using configuration
00:27:11.146 Configuring doxy-api-man.conf using configuration
00:27:11.146 Program mandb found: YES (/usr/bin/mandb)
00:27:11.146 Program sphinx-build found: NO
00:27:11.146 Configuring rte_build_config.h using configuration
00:27:11.146 Message:
00:27:11.146 =================
00:27:11.146 Applications Enabled
00:27:11.146 =================
00:27:11.146
00:27:11.146 apps:
00:27:11.146
00:27:11.146
00:27:11.146 Message:
00:27:11.146 =================
00:27:11.146 Libraries Enabled
00:27:11.146 =================
00:27:11.146
00:27:11.146 libs:
00:27:11.146 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:27:11.146 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:27:11.146 cryptodev, dmadev, power, reorder, security, vhost,
00:27:11.146
00:27:11.146 Message:
00:27:11.146 ===============
00:27:11.146 Drivers Enabled
00:27:11.146 ===============
00:27:11.146
00:27:11.146 common:
00:27:11.146
00:27:11.146 bus:
00:27:11.146 pci, vdev,
00:27:11.146 mempool:
00:27:11.146 ring,
00:27:11.146 dma:
00:27:11.146
00:27:11.146 net:
00:27:11.146
00:27:11.146 crypto:
00:27:11.146
00:27:11.146 compress:
00:27:11.146
00:27:11.146 vdpa:
00:27:11.146
00:27:11.146
00:27:11.146 Message:
00:27:11.146 =================
00:27:11.146 Content Skipped
00:27:11.146 =================
00:27:11.146
00:27:11.146 apps:
00:27:11.146 dumpcap: explicitly disabled via build config
00:27:11.146 graph: explicitly disabled via build config
00:27:11.146 pdump: explicitly disabled via build config
00:27:11.146 proc-info: explicitly disabled via build config
00:27:11.146 test-acl: explicitly disabled via build config
00:27:11.146 test-bbdev: explicitly disabled via build config
00:27:11.146 test-cmdline: explicitly disabled via build config
00:27:11.146 test-compress-perf: explicitly disabled via build config
00:27:11.146 test-crypto-perf: explicitly disabled via build config
00:27:11.146 test-dma-perf: explicitly disabled via build config
00:27:11.146 test-eventdev: explicitly disabled via build config
00:27:11.146 test-fib: explicitly disabled via build config
00:27:11.146 test-flow-perf: explicitly disabled via build config
00:27:11.146 test-gpudev: explicitly disabled via build config
00:27:11.146 test-mldev: explicitly disabled via build config
00:27:11.146 test-pipeline: explicitly disabled via build config
00:27:11.146 test-pmd: explicitly disabled via build config
00:27:11.146 test-regex: explicitly disabled via build config
00:27:11.146 test-sad: explicitly disabled via build config
00:27:11.146 test-security-perf: explicitly disabled via build config
00:27:11.146
00:27:11.146 libs:
00:27:11.146 metrics: explicitly disabled via build config
00:27:11.146 acl: explicitly disabled via build config
00:27:11.146 bbdev: explicitly disabled via build config
00:27:11.146 bitratestats: explicitly disabled via build config
00:27:11.146 bpf: explicitly disabled via build config
00:27:11.146 cfgfile: explicitly disabled via build config
00:27:11.146 distributor: explicitly disabled via build config
00:27:11.146 efd: explicitly disabled via build config
00:27:11.146 eventdev: explicitly disabled via build config
00:27:11.146 dispatcher: explicitly disabled via build config
00:27:11.146 gpudev: explicitly disabled via build config
00:27:11.146 gro: explicitly disabled via build config
00:27:11.146 gso: explicitly disabled via build config
00:27:11.146 ip_frag: explicitly disabled via build config
00:27:11.146 jobstats: explicitly disabled via build config
00:27:11.146 latencystats: explicitly disabled via build config
00:27:11.146 lpm: explicitly disabled via build config
00:27:11.146 member: explicitly disabled via build config
00:27:11.146 pcapng: explicitly disabled via build config
00:27:11.146 rawdev: explicitly disabled via build config
00:27:11.146 regexdev: explicitly disabled via build config
00:27:11.146 mldev: explicitly disabled via build config
00:27:11.146 rib: explicitly disabled via build config
00:27:11.146 sched: explicitly disabled via build config
00:27:11.146 stack: explicitly disabled via build config
00:27:11.146 ipsec: explicitly disabled via build config
00:27:11.146 pdcp: explicitly disabled via build config
00:27:11.146 fib: explicitly disabled via build config
00:27:11.146 port: explicitly disabled via build config
00:27:11.146 pdump: explicitly disabled via build config
00:27:11.146 table: explicitly disabled via build config
00:27:11.146 pipeline: explicitly disabled via build config
00:27:11.146 graph: explicitly disabled via build config
00:27:11.146 node: explicitly disabled via build config
00:27:11.146
00:27:11.146 drivers:
00:27:11.146 common/cpt: not in enabled drivers build config
00:27:11.146 common/dpaax: not in enabled drivers build config
00:27:11.146 common/iavf: not in enabled drivers build config
00:27:11.146 common/idpf: not in enabled drivers build config
00:27:11.146 common/mvep: not in enabled drivers build config
00:27:11.146 common/octeontx: not in enabled drivers build config
00:27:11.146 bus/auxiliary: not in enabled drivers build config
00:27:11.146 bus/cdx: not in enabled drivers build config
00:27:11.146 bus/dpaa: not in enabled drivers build config
00:27:11.146 bus/fslmc: not in enabled drivers build config
00:27:11.146 bus/ifpga: not in enabled drivers build config
00:27:11.146 bus/platform: not in enabled drivers build config
00:27:11.146 bus/vmbus: not in enabled drivers build config
00:27:11.146 common/cnxk: not in enabled drivers build config
00:27:11.146 common/mlx5: not in enabled drivers build config
00:27:11.146 common/nfp: not in enabled drivers build config
00:27:11.146 common/qat: not in enabled drivers build config
00:27:11.146 common/sfc_efx: not in enabled drivers build config
00:27:11.146 mempool/bucket: not in enabled drivers build config
00:27:11.146 mempool/cnxk: not in enabled drivers build config
00:27:11.146 mempool/dpaa: not in enabled drivers build config
00:27:11.146 mempool/dpaa2: not in enabled drivers build config
00:27:11.146 mempool/octeontx: not in enabled drivers build config
00:27:11.146 mempool/stack: not in enabled drivers build config
00:27:11.146 dma/cnxk: not in enabled drivers build config
00:27:11.146 dma/dpaa: not in enabled drivers build config
00:27:11.146 dma/dpaa2: not in enabled drivers build config
00:27:11.146 dma/hisilicon: not in enabled drivers build config
00:27:11.146 dma/idxd: not in enabled drivers build config
00:27:11.146 dma/ioat: not in enabled drivers build config
00:27:11.146 dma/skeleton: not in enabled drivers build config
00:27:11.146 net/af_packet: not in enabled drivers build config
00:27:11.146 net/af_xdp: not in enabled drivers build config
00:27:11.146 net/ark: not in enabled drivers build config
00:27:11.146 net/atlantic: not in enabled drivers build config
00:27:11.146 net/avp: not in enabled drivers build config
00:27:11.146 net/axgbe: not in enabled drivers build config
00:27:11.146 net/bnx2x: not in enabled drivers build config
00:27:11.146 net/bnxt: not in enabled drivers build config
00:27:11.146 net/bonding: not in enabled drivers build config
00:27:11.146 net/cnxk: not in enabled drivers build config
00:27:11.146 net/cpfl: not in enabled drivers build config
00:27:11.146 net/cxgbe: not in enabled drivers build config
00:27:11.146 net/dpaa: not in enabled drivers build config
00:27:11.146 net/dpaa2: not in enabled drivers build config
00:27:11.146 net/e1000: not in enabled drivers build config
00:27:11.146 net/ena: not in enabled drivers build config
00:27:11.146 net/enetc: not in enabled drivers build config
00:27:11.146 net/enetfec: not in enabled drivers build config
00:27:11.146 net/enic: not in enabled drivers build config
00:27:11.146 net/failsafe: not in enabled drivers build config
00:27:11.146 net/fm10k: not in enabled drivers build config
00:27:11.146 net/gve: not in enabled drivers build config
00:27:11.146 net/hinic: not in enabled drivers build config
00:27:11.146 net/hns3: not in enabled drivers build config
00:27:11.146 net/i40e: not in enabled drivers build config
00:27:11.146 net/iavf: not in enabled drivers build config
00:27:11.146 net/ice: not in enabled drivers build config
00:27:11.146 net/idpf: not in enabled drivers build config
00:27:11.146 net/igc: not in enabled drivers build config
00:27:11.146 net/ionic: not in enabled drivers build config
00:27:11.146 net/ipn3ke: not in enabled drivers build config
00:27:11.146 net/ixgbe: not in enabled drivers build config
00:27:11.146 net/mana: not in enabled drivers build config
00:27:11.146 net/memif: not in enabled drivers build config
00:27:11.146 net/mlx4: not in enabled drivers build config
00:27:11.146 net/mlx5: not in enabled drivers build config
00:27:11.146 net/mvneta: not in enabled drivers build config
00:27:11.146 net/mvpp2: not in enabled drivers build config
00:27:11.146 net/netvsc: not in enabled drivers build config
00:27:11.146 net/nfb: not in enabled drivers build config
00:27:11.146 net/nfp: not in enabled drivers build config
00:27:11.146 net/ngbe: not in enabled drivers build config
00:27:11.146 net/null: not in enabled drivers build config
00:27:11.146 net/octeontx: not in enabled drivers build config
00:27:11.146 net/octeon_ep: not in enabled drivers build config
00:27:11.146 net/pcap: not in enabled drivers build config
00:27:11.146 net/pfe: not in enabled drivers build config
00:27:11.146 net/qede: not in enabled drivers build config
00:27:11.146 net/ring: not in enabled drivers build config
00:27:11.146 net/sfc: not in enabled drivers build config
00:27:11.146 net/softnic: not in enabled drivers build config
00:27:11.146 net/tap: not in enabled drivers build config
00:27:11.146 net/thunderx: not in enabled drivers build config
00:27:11.146 net/txgbe: not in enabled drivers build config
00:27:11.146 net/vdev_netvsc: not in enabled drivers build config
00:27:11.146 net/vhost: not in enabled drivers build config
00:27:11.146 net/virtio: not in enabled drivers build config
00:27:11.146 net/vmxnet3: not in enabled drivers build config
00:27:11.146 raw/*: missing internal dependency, "rawdev"
00:27:11.146 crypto/armv8: not in enabled drivers build config
00:27:11.147 crypto/bcmfs: not in enabled drivers build config
00:27:11.147 crypto/caam_jr: not in enabled drivers build config
00:27:11.147 crypto/ccp: not in enabled drivers build config
00:27:11.147 crypto/cnxk: not in enabled drivers build config
00:27:11.147 crypto/dpaa_sec: not in enabled drivers build config
00:27:11.147 crypto/dpaa2_sec: not in enabled drivers build config
00:27:11.147 crypto/ipsec_mb: not in enabled drivers build config
00:27:11.147 crypto/mlx5: not in enabled drivers build config
00:27:11.147 crypto/mvsam: not in enabled drivers build config
00:27:11.147 crypto/nitrox: not in enabled drivers build config
00:27:11.147 crypto/null: not in enabled drivers build config
00:27:11.147 crypto/octeontx: not in enabled drivers build config
00:27:11.147 crypto/openssl: not in enabled drivers build config
00:27:11.147 crypto/scheduler: not in enabled drivers build config
00:27:11.147 crypto/uadk: not in enabled drivers build config
00:27:11.147 crypto/virtio: not in enabled drivers build config
00:27:11.147 compress/isal: not in enabled drivers build config
00:27:11.147 compress/mlx5: not in enabled drivers build config
00:27:11.147 compress/octeontx: not in enabled drivers build config
00:27:11.147 compress/zlib: not in enabled drivers build config
00:27:11.147 regex/*: missing internal dependency, "regexdev"
00:27:11.147 ml/*: missing internal dependency, "mldev"
00:27:11.147 vdpa/ifc: not in enabled drivers build config
00:27:11.147 vdpa/mlx5: not in enabled drivers build config
00:27:11.147 vdpa/nfp: not in enabled drivers build config
00:27:11.147 vdpa/sfc: not in enabled drivers build config
00:27:11.147 event/*: missing internal dependency, "eventdev"
00:27:11.147 baseband/*: missing internal dependency, "bbdev"
00:27:11.147 gpu/*: missing internal dependency, "gpudev"
00:27:11.147
00:27:11.147
00:27:11.147 Build targets in project: 85
00:27:11.147
00:27:11.147 DPDK 23.11.0
00:27:11.147
00:27:11.147 User defined options
00:27:11.147 buildtype : debug
00:27:11.147 default_library : shared
00:27:11.147 libdir : lib
00:27:11.147 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:27:11.147 b_sanitize : address
00:27:11.147 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:27:11.147 c_link_args :
00:27:11.147 cpu_instruction_set: native
00:27:11.147 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:27:11.147 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:27:11.147 enable_docs : false
00:27:11.147 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:27:11.147 enable_kmods : false
00:27:11.147 tests : false
00:27:11.147
00:27:11.147 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:27:11.147 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:27:11.147 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:27:11.147 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:27:11.147 [3/265] Linking static target lib/librte_kvargs.a
00:27:11.147 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:27:11.147 [5/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:27:11.147 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:27:11.147 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:27:11.147 [8/265] Linking static target lib/librte_log.a
00:27:11.147 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:27:11.147 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:27:11.147 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:27:11.147 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:27:11.404 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:27:11.404 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:27:11.404 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:27:11.661 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:27:11.661 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:27:11.661 [18/265] Linking static target lib/librte_telemetry.a
00:27:11.661 [19/265] Generating lib/kvargs.sym_chk with a custom
command (wrapped by meson to capture output) 00:27:11.661 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:11.661 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:11.918 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:11.918 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:12.176 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:12.176 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:12.433 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:12.433 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:12.433 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:12.433 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:12.433 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:12.691 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:12.949 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:12.949 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:12.949 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:12.949 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:12.949 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:13.207 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:13.207 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:13.207 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:13.207 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:13.207 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:13.207 [42/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:27:13.465 [43/265] Linking target lib/librte_log.so.24.0 00:27:13.722 [44/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:27:13.722 [45/265] Linking target lib/librte_kvargs.so.24.0 00:27:13.722 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:13.722 [47/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:27:13.980 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:13.980 [49/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:27:13.980 [50/265] Linking target lib/librte_telemetry.so.24.0 00:27:13.980 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:13.980 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:13.980 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:14.270 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:14.270 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:14.270 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:14.270 [57/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:27:14.270 [58/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:14.270 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:14.530 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:14.530 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:14.788 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:14.788 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:14.788 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:14.788 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:14.788 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:15.046 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:15.046 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:15.046 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:15.305 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:15.305 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:15.305 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:15.564 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:15.564 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:15.564 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:15.564 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:15.564 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:15.564 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:15.821 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:15.821 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:16.080 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:16.337 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:16.337 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:16.337 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:16.337 [85/265] Linking static target lib/librte_ring.a 00:27:16.606 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:16.606 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:16.606 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:16.606 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:16.892 [90/265] Linking static target lib/librte_mempool.a 00:27:16.892 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:16.892 [92/265] Linking static target lib/librte_eal.a 00:27:16.892 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:16.892 [94/265] Linking static target lib/librte_rcu.a 00:27:17.151 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:17.409 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:17.409 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:17.409 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:17.409 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:17.667 [100/265] Generating 
lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:27:17.925 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:17.925 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:17.925 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:17.925 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:17.925 [105/265] Linking static target lib/librte_mbuf.a 00:27:17.925 [106/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:27:18.181 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:18.181 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:18.439 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:18.439 [110/265] Linking static target lib/librte_net.a 00:27:18.439 [111/265] Linking static target lib/librte_meter.a 00:27:18.439 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:18.697 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:18.697 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:27:18.971 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:19.230 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.230 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.488 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:27:19.488 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:27:19.488 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:27:19.745 [121/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:27:20.311 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:27:20.311 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:20.311 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:27:20.311 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:27:20.311 [126/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:27:20.311 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:27:20.569 [128/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:27:20.569 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:27:20.569 [130/265] Linking static target lib/librte_pci.a 00:27:20.569 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:27:20.826 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:27:20.826 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:20.826 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:27:20.826 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:20.826 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:20.826 [137/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:27:20.826 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:21.084 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:21.084 [140/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:21.084 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:21.084 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:21.084 [143/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:21.084 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:21.342 [145/265] Linking static target lib/librte_cmdline.a 00:27:21.342 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:27:21.599 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:27:21.599 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:21.857 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:21.857 [150/265] Linking static target lib/librte_timer.a 00:27:21.857 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:27:21.857 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:27:22.115 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:27:22.115 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:27:22.115 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:27:22.115 [156/265] Linking static target lib/librte_ethdev.a 00:27:22.375 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:27:22.375 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:27:22.375 [159/265] Linking static target lib/librte_compressdev.a 00:27:22.375 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:27:22.633 [161/265] Linking static target lib/librte_hash.a 00:27:22.633 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:27:22.633 [163/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:27:22.633 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:27:22.633 [165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:27:22.891 [166/265] Linking static target lib/librte_dmadev.a 00:27:22.891 [167/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:27:22.891 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:27:23.148 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:27:23.407 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:23.407 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:27:23.407 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:27:23.665 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:23.665 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:27:23.665 [175/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:27:23.665 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:27:23.665 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:27:23.665 [178/265] Linking static target lib/librte_cryptodev.a 00:27:23.923 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to 
capture output) 00:27:23.923 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:27:23.923 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:27:24.181 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:27:24.181 [183/265] Linking static target lib/librte_power.a 00:27:24.440 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:27:24.440 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:27:24.440 [186/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:27:24.440 [187/265] Linking static target lib/librte_reorder.a 00:27:24.699 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:27:24.699 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:27:24.699 [190/265] Linking static target lib/librte_security.a 00:27:25.009 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.269 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.269 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:27:25.528 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:27:25.528 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.528 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:27:25.786 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:27:25.786 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:27:26.045 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:27:26.045 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:27:26.045 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:27:26.045 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:27:26.303 [203/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:26.303 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:27:26.561 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:27:26.561 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:27:26.561 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:27:26.561 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:27:26.561 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:27:26.819 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:26.819 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:26.819 [212/265] Linking static target drivers/librte_bus_vdev.a 00:27:26.819 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:27:26.819 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:26.819 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:26.819 [216/265] Linking static target drivers/librte_bus_pci.a 00:27:27.077 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:27.077 [218/265] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:27:27.077 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:27:27.335 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:27:27.335 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:27.335 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:27.335 [223/265] Linking static target drivers/librte_mempool_ring.a 00:27:27.594 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:29.059 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:27:30.447 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:30.447 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:27:30.706 [228/265] Linking target lib/librte_eal.so.24.0 00:27:30.706 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:27:30.965 [230/265] Linking target lib/librte_dmadev.so.24.0 00:27:30.965 [231/265] Linking target lib/librte_meter.so.24.0 00:27:30.965 [232/265] Linking target lib/librte_pci.so.24.0 00:27:30.965 [233/265] Linking target lib/librte_ring.so.24.0 00:27:30.965 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:27:30.965 [235/265] Linking target lib/librte_timer.so.24.0 00:27:30.965 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:27:30.965 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:27:30.965 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:27:30.965 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:27:30.965 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:27:31.223 [241/265] Linking target lib/librte_rcu.so.24.0 00:27:31.223 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:27:31.223 [243/265] Linking target lib/librte_mempool.so.24.0 00:27:31.223 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:27:31.223 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:27:31.482 [246/265] Linking target lib/librte_mbuf.so.24.0 00:27:31.482 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:27:31.742 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:27:31.742 [249/265] Linking target lib/librte_net.so.24.0 00:27:31.742 [250/265] Linking target lib/librte_reorder.so.24.0 00:27:31.742 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:27:31.742 [252/265] Linking target lib/librte_compressdev.so.24.0 00:27:31.742 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:27:32.000 [254/265] Linking target lib/librte_hash.so.24.0 00:27:32.000 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:27:32.000 [256/265] Linking target lib/librte_cmdline.so.24.0 00:27:32.000 [257/265] Linking target lib/librte_ethdev.so.24.0 00:27:32.000 [258/265] Linking target lib/librte_security.so.24.0 00:27:32.261 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:27:32.261 [260/265] Generating symbol file 
lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:27:32.261 [261/265] Linking target lib/librte_power.so.24.0 00:27:33.637 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:27:33.637 [263/265] Linking static target lib/librte_vhost.a 00:27:35.542 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:27:35.542 [265/265] Linking target lib/librte_vhost.so.24.0 00:27:35.542 INFO: autodetecting backend as ninja 00:27:35.542 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:27:36.977 CC lib/log/log_flags.o 00:27:36.977 CC lib/log/log.o 00:27:36.977 CC lib/log/log_deprecated.o 00:27:36.977 CC lib/ut_mock/mock.o 00:27:36.977 CC lib/ut/ut.o 00:27:36.977 LIB libspdk_ut_mock.a 00:27:36.977 SO libspdk_ut_mock.so.6.0 00:27:36.977 LIB libspdk_log.a 00:27:36.977 LIB libspdk_ut.a 00:27:36.977 SYMLINK libspdk_ut_mock.so 00:27:36.977 SO libspdk_ut.so.2.0 00:27:36.977 SO libspdk_log.so.7.0 00:27:36.977 SYMLINK libspdk_ut.so 00:27:36.977 SYMLINK libspdk_log.so 00:27:37.235 CC lib/dma/dma.o 00:27:37.235 CC lib/ioat/ioat.o 00:27:37.235 CXX lib/trace_parser/trace.o 00:27:37.235 CC lib/util/base64.o 00:27:37.235 CC lib/util/crc16.o 00:27:37.235 CC lib/util/bit_array.o 00:27:37.235 CC lib/util/cpuset.o 00:27:37.235 CC lib/util/crc32.o 00:27:37.235 CC lib/util/crc32c.o 00:27:37.493 CC lib/vfio_user/host/vfio_user_pci.o 00:27:37.493 CC lib/util/crc32_ieee.o 00:27:37.493 CC lib/util/crc64.o 00:27:37.493 CC lib/vfio_user/host/vfio_user.o 00:27:37.493 CC lib/util/dif.o 00:27:37.493 LIB libspdk_dma.a 00:27:37.493 CC lib/util/fd.o 00:27:37.752 SO libspdk_dma.so.4.0 00:27:37.752 CC lib/util/file.o 00:27:37.752 CC lib/util/hexlify.o 00:27:37.752 SYMLINK libspdk_dma.so 00:27:37.752 CC lib/util/iov.o 00:27:37.752 CC lib/util/math.o 00:27:37.752 CC lib/util/pipe.o 00:27:37.752 CC lib/util/strerror_tls.o 00:27:37.752 LIB libspdk_ioat.a 00:27:37.752 CC lib/util/string.o 00:27:37.752 SO libspdk_ioat.so.7.0 00:27:37.752 CC lib/util/uuid.o 00:27:38.009 CC lib/util/fd_group.o 00:27:38.009 LIB libspdk_vfio_user.a 00:27:38.009 CC lib/util/xor.o 00:27:38.009 SYMLINK libspdk_ioat.so 00:27:38.009 CC lib/util/zipf.o 00:27:38.009 SO libspdk_vfio_user.so.5.0 00:27:38.009 SYMLINK libspdk_vfio_user.so 00:27:38.268 LIB libspdk_util.a 00:27:38.526 SO libspdk_util.so.9.0 00:27:38.526 LIB libspdk_trace_parser.a 00:27:38.526 SO libspdk_trace_parser.so.5.0 00:27:38.526 SYMLINK libspdk_util.so 00:27:38.785 SYMLINK libspdk_trace_parser.so 00:27:38.785 CC lib/rdma/common.o 00:27:38.785 CC lib/rdma/rdma_verbs.o 00:27:38.785 CC lib/vmd/vmd.o 00:27:38.785 CC lib/vmd/led.o 00:27:38.785 CC lib/idxd/idxd.o 00:27:38.785 CC lib/idxd/idxd_user.o 00:27:38.785 CC lib/env_dpdk/memory.o 00:27:38.785 CC lib/env_dpdk/env.o 00:27:38.785 CC lib/conf/conf.o 00:27:38.785 CC lib/json/json_parse.o 00:27:39.043 CC lib/json/json_util.o 00:27:39.043 CC lib/env_dpdk/pci.o 00:27:39.043 CC lib/env_dpdk/init.o 00:27:39.043 CC lib/json/json_write.o 00:27:39.043 LIB libspdk_rdma.a 00:27:39.043 LIB libspdk_conf.a 00:27:39.043 SO libspdk_rdma.so.6.0 00:27:39.301 SO libspdk_conf.so.6.0 00:27:39.301 SYMLINK libspdk_rdma.so 00:27:39.301 CC lib/env_dpdk/threads.o 00:27:39.301 SYMLINK libspdk_conf.so 00:27:39.301 CC lib/env_dpdk/pci_ioat.o 00:27:39.301 CC lib/env_dpdk/pci_virtio.o 00:27:39.301 CC lib/env_dpdk/pci_vmd.o 00:27:39.559 CC lib/env_dpdk/pci_idxd.o 00:27:39.559 LIB libspdk_json.a 00:27:39.559 CC lib/env_dpdk/pci_event.o 
00:27:39.559 CC lib/env_dpdk/sigbus_handler.o 00:27:39.559 CC lib/env_dpdk/pci_dpdk.o 00:27:39.559 SO libspdk_json.so.6.0 00:27:39.559 LIB libspdk_idxd.a 00:27:39.559 CC lib/env_dpdk/pci_dpdk_2207.o 00:27:39.559 SO libspdk_idxd.so.12.0 00:27:39.559 SYMLINK libspdk_json.so 00:27:39.559 CC lib/env_dpdk/pci_dpdk_2211.o 00:27:39.559 LIB libspdk_vmd.a 00:27:39.559 SYMLINK libspdk_idxd.so 00:27:39.817 SO libspdk_vmd.so.6.0 00:27:39.817 SYMLINK libspdk_vmd.so 00:27:39.817 CC lib/jsonrpc/jsonrpc_server.o 00:27:39.817 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:27:39.817 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:27:39.817 CC lib/jsonrpc/jsonrpc_client.o 00:27:40.075 LIB libspdk_jsonrpc.a 00:27:40.335 SO libspdk_jsonrpc.so.6.0 00:27:40.335 SYMLINK libspdk_jsonrpc.so 00:27:40.593 CC lib/rpc/rpc.o 00:27:40.851 LIB libspdk_env_dpdk.a 00:27:40.851 SO libspdk_env_dpdk.so.14.0 00:27:40.851 LIB libspdk_rpc.a 00:27:41.109 SO libspdk_rpc.so.6.0 00:27:41.109 SYMLINK libspdk_rpc.so 00:27:41.109 SYMLINK libspdk_env_dpdk.so 00:27:41.366 CC lib/keyring/keyring_rpc.o 00:27:41.366 CC lib/trace/trace.o 00:27:41.366 CC lib/trace/trace_flags.o 00:27:41.366 CC lib/trace/trace_rpc.o 00:27:41.366 CC lib/notify/notify.o 00:27:41.366 CC lib/keyring/keyring.o 00:27:41.366 CC lib/notify/notify_rpc.o 00:27:41.625 LIB libspdk_notify.a 00:27:41.625 SO libspdk_notify.so.6.0 00:27:41.625 SYMLINK libspdk_notify.so 00:27:41.884 LIB libspdk_keyring.a 00:27:41.884 LIB libspdk_trace.a 00:27:41.884 SO libspdk_keyring.so.1.0 00:27:41.884 SO libspdk_trace.so.10.0 00:27:41.884 SYMLINK libspdk_keyring.so 00:27:41.884 SYMLINK libspdk_trace.so 00:27:42.142 CC lib/thread/thread.o 00:27:42.142 CC lib/thread/iobuf.o 00:27:42.142 CC lib/sock/sock.o 00:27:42.142 CC lib/sock/sock_rpc.o 00:27:42.763 LIB libspdk_sock.a 00:27:42.763 SO libspdk_sock.so.9.0 00:27:43.020 SYMLINK libspdk_sock.so 00:27:43.277 CC lib/nvme/nvme_ctrlr.o 00:27:43.277 CC lib/nvme/nvme_ctrlr_cmd.o 00:27:43.277 CC lib/nvme/nvme_fabric.o 00:27:43.277 CC lib/nvme/nvme_ns.o 00:27:43.277 CC lib/nvme/nvme_pcie.o 00:27:43.277 CC lib/nvme/nvme_ns_cmd.o 00:27:43.277 CC lib/nvme/nvme.o 00:27:43.277 CC lib/nvme/nvme_qpair.o 00:27:43.277 CC lib/nvme/nvme_pcie_common.o 00:27:44.210 CC lib/nvme/nvme_quirks.o 00:27:44.210 CC lib/nvme/nvme_transport.o 00:27:44.210 LIB libspdk_thread.a 00:27:44.211 CC lib/nvme/nvme_discovery.o 00:27:44.211 SO libspdk_thread.so.10.0 00:27:44.211 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:27:44.468 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:27:44.468 SYMLINK libspdk_thread.so 00:27:44.468 CC lib/nvme/nvme_tcp.o 00:27:44.468 CC lib/nvme/nvme_opal.o 00:27:44.468 CC lib/nvme/nvme_io_msg.o 00:27:44.468 CC lib/nvme/nvme_poll_group.o 00:27:44.726 CC lib/nvme/nvme_zns.o 00:27:44.985 CC lib/nvme/nvme_stubs.o 00:27:44.985 CC lib/nvme/nvme_auth.o 00:27:44.985 CC lib/nvme/nvme_cuse.o 00:27:44.985 CC lib/nvme/nvme_rdma.o 00:27:45.552 CC lib/accel/accel.o 00:27:45.552 CC lib/blob/blobstore.o 00:27:45.552 CC lib/blob/request.o 00:27:45.552 CC lib/init/json_config.o 00:27:45.552 CC lib/init/subsystem.o 00:27:45.813 CC lib/blob/zeroes.o 00:27:45.813 CC lib/init/subsystem_rpc.o 00:27:45.813 CC lib/accel/accel_rpc.o 00:27:45.813 CC lib/init/rpc.o 00:27:45.813 CC lib/accel/accel_sw.o 00:27:46.101 CC lib/blob/blob_bs_dev.o 00:27:46.101 LIB libspdk_init.a 00:27:46.101 SO libspdk_init.so.5.0 00:27:46.101 CC lib/virtio/virtio.o 00:27:46.101 CC lib/virtio/virtio_vfio_user.o 00:27:46.101 CC lib/virtio/virtio_vhost_user.o 00:27:46.360 CC lib/virtio/virtio_pci.o 00:27:46.360 SYMLINK libspdk_init.so 
00:27:46.360 CC lib/event/app.o 00:27:46.360 CC lib/event/reactor.o 00:27:46.360 CC lib/event/log_rpc.o 00:27:46.360 CC lib/event/app_rpc.o 00:27:46.619 CC lib/event/scheduler_static.o 00:27:46.619 LIB libspdk_virtio.a 00:27:46.619 SO libspdk_virtio.so.7.0 00:27:46.619 LIB libspdk_nvme.a 00:27:46.877 SYMLINK libspdk_virtio.so 00:27:46.877 LIB libspdk_accel.a 00:27:46.877 SO libspdk_accel.so.15.0 00:27:46.877 SO libspdk_nvme.so.13.0 00:27:47.135 LIB libspdk_event.a 00:27:47.135 SYMLINK libspdk_accel.so 00:27:47.135 SO libspdk_event.so.13.0 00:27:47.393 SYMLINK libspdk_event.so 00:27:47.393 SYMLINK libspdk_nvme.so 00:27:47.393 CC lib/bdev/bdev_zone.o 00:27:47.393 CC lib/bdev/bdev.o 00:27:47.393 CC lib/bdev/part.o 00:27:47.393 CC lib/bdev/bdev_rpc.o 00:27:47.393 CC lib/bdev/scsi_nvme.o 00:27:49.294 LIB libspdk_blob.a 00:27:49.574 SO libspdk_blob.so.11.0 00:27:49.574 SYMLINK libspdk_blob.so 00:27:49.831 CC lib/lvol/lvol.o 00:27:49.831 CC lib/blobfs/tree.o 00:27:49.831 CC lib/blobfs/blobfs.o 00:27:50.765 LIB libspdk_bdev.a 00:27:50.765 SO libspdk_bdev.so.15.0 00:27:51.023 SYMLINK libspdk_bdev.so 00:27:51.023 LIB libspdk_blobfs.a 00:27:51.023 SO libspdk_blobfs.so.10.0 00:27:51.023 LIB libspdk_lvol.a 00:27:51.023 SYMLINK libspdk_blobfs.so 00:27:51.281 CC lib/nbd/nbd.o 00:27:51.281 CC lib/nbd/nbd_rpc.o 00:27:51.281 CC lib/scsi/dev.o 00:27:51.281 CC lib/ftl/ftl_core.o 00:27:51.281 CC lib/nvmf/ctrlr_discovery.o 00:27:51.281 CC lib/ftl/ftl_init.o 00:27:51.281 CC lib/nvmf/ctrlr.o 00:27:51.281 CC lib/scsi/lun.o 00:27:51.281 SO libspdk_lvol.so.10.0 00:27:51.281 CC lib/ublk/ublk.o 00:27:51.281 SYMLINK libspdk_lvol.so 00:27:51.281 CC lib/scsi/port.o 00:27:51.281 CC lib/scsi/scsi.o 00:27:51.538 CC lib/scsi/scsi_bdev.o 00:27:51.538 CC lib/nvmf/ctrlr_bdev.o 00:27:51.538 CC lib/nvmf/subsystem.o 00:27:51.538 CC lib/nvmf/nvmf.o 00:27:51.538 CC lib/nvmf/nvmf_rpc.o 00:27:51.538 LIB libspdk_nbd.a 00:27:51.796 SO libspdk_nbd.so.7.0 00:27:51.796 CC lib/ftl/ftl_layout.o 00:27:51.796 SYMLINK libspdk_nbd.so 00:27:51.796 CC lib/nvmf/transport.o 00:27:51.796 CC lib/ftl/ftl_debug.o 00:27:52.053 CC lib/nvmf/tcp.o 00:27:52.053 CC lib/ftl/ftl_io.o 00:27:52.312 CC lib/scsi/scsi_pr.o 00:27:52.569 CC lib/ftl/ftl_sb.o 00:27:52.569 CC lib/ublk/ublk_rpc.o 00:27:52.569 CC lib/ftl/ftl_l2p.o 00:27:52.569 LIB libspdk_ublk.a 00:27:52.569 CC lib/scsi/scsi_rpc.o 00:27:52.828 CC lib/ftl/ftl_l2p_flat.o 00:27:52.828 SO libspdk_ublk.so.3.0 00:27:52.828 CC lib/nvmf/rdma.o 00:27:52.828 SYMLINK libspdk_ublk.so 00:27:52.828 CC lib/scsi/task.o 00:27:52.828 CC lib/ftl/ftl_nv_cache.o 00:27:52.828 CC lib/ftl/ftl_band.o 00:27:52.828 CC lib/ftl/ftl_band_ops.o 00:27:53.085 CC lib/ftl/ftl_writer.o 00:27:53.085 CC lib/ftl/ftl_rq.o 00:27:53.085 CC lib/ftl/ftl_reloc.o 00:27:53.085 LIB libspdk_scsi.a 00:27:53.085 SO libspdk_scsi.so.9.0 00:27:53.342 CC lib/ftl/ftl_l2p_cache.o 00:27:53.342 CC lib/ftl/ftl_p2l.o 00:27:53.342 SYMLINK libspdk_scsi.so 00:27:53.342 CC lib/ftl/mngt/ftl_mngt.o 00:27:53.342 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:27:53.600 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:27:53.600 CC lib/iscsi/conn.o 00:27:53.600 CC lib/ftl/mngt/ftl_mngt_startup.o 00:27:53.600 CC lib/iscsi/init_grp.o 00:27:53.858 CC lib/vhost/vhost.o 00:27:53.858 CC lib/ftl/mngt/ftl_mngt_md.o 00:27:53.858 CC lib/vhost/vhost_rpc.o 00:27:53.858 CC lib/vhost/vhost_scsi.o 00:27:53.858 CC lib/ftl/mngt/ftl_mngt_misc.o 00:27:54.115 CC lib/iscsi/iscsi.o 00:27:54.116 CC lib/iscsi/md5.o 00:27:54.116 CC lib/vhost/vhost_blk.o 00:27:54.373 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:27:54.373 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:27:54.373 CC lib/iscsi/param.o 00:27:54.373 CC lib/vhost/rte_vhost_user.o 00:27:54.373 CC lib/ftl/mngt/ftl_mngt_band.o 00:27:54.634 CC lib/iscsi/portal_grp.o 00:27:54.634 CC lib/iscsi/tgt_node.o 00:27:54.634 CC lib/iscsi/iscsi_subsystem.o 00:27:54.634 CC lib/iscsi/iscsi_rpc.o 00:27:54.895 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:27:54.895 CC lib/iscsi/task.o 00:27:54.895 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:27:55.164 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:27:55.164 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:27:55.164 CC lib/ftl/utils/ftl_conf.o 00:27:55.164 CC lib/ftl/utils/ftl_md.o 00:27:55.164 CC lib/ftl/utils/ftl_mempool.o 00:27:55.164 CC lib/ftl/utils/ftl_bitmap.o 00:27:55.164 CC lib/ftl/utils/ftl_property.o 00:27:55.423 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:27:55.423 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:27:55.423 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:27:55.423 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:27:55.423 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:27:55.423 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:27:55.681 LIB libspdk_vhost.a 00:27:55.681 CC lib/ftl/upgrade/ftl_sb_v3.o 00:27:55.681 CC lib/ftl/upgrade/ftl_sb_v5.o 00:27:55.681 CC lib/ftl/nvc/ftl_nvc_dev.o 00:27:55.681 SO libspdk_vhost.so.8.0 00:27:55.681 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:27:55.681 CC lib/ftl/base/ftl_base_dev.o 00:27:55.681 CC lib/ftl/base/ftl_base_bdev.o 00:27:55.681 SYMLINK libspdk_vhost.so 00:27:55.681 CC lib/ftl/ftl_trace.o 00:27:55.939 LIB libspdk_iscsi.a 00:27:55.939 SO libspdk_iscsi.so.8.0 00:27:55.939 LIB libspdk_nvmf.a 00:27:55.939 LIB libspdk_ftl.a 00:27:56.197 SYMLINK libspdk_iscsi.so 00:27:56.197 SO libspdk_nvmf.so.18.0 00:27:56.456 SO libspdk_ftl.so.9.0 00:27:56.456 SYMLINK libspdk_nvmf.so 00:27:57.023 SYMLINK libspdk_ftl.so 00:27:57.282 CC module/env_dpdk/env_dpdk_rpc.o 00:27:57.282 CC module/sock/posix/posix.o 00:27:57.282 CC module/keyring/file/keyring.o 00:27:57.282 CC module/accel/error/accel_error.o 00:27:57.282 CC module/scheduler/dynamic/scheduler_dynamic.o 00:27:57.282 CC module/accel/iaa/accel_iaa.o 00:27:57.282 CC module/accel/ioat/accel_ioat.o 00:27:57.282 CC module/blob/bdev/blob_bdev.o 00:27:57.282 CC module/accel/dsa/accel_dsa.o 00:27:57.282 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:27:57.540 LIB libspdk_env_dpdk_rpc.a 00:27:57.540 CC module/keyring/file/keyring_rpc.o 00:27:57.540 SO libspdk_env_dpdk_rpc.so.6.0 00:27:57.540 CC module/accel/error/accel_error_rpc.o 00:27:57.540 LIB libspdk_scheduler_dpdk_governor.a 00:27:57.540 LIB libspdk_scheduler_dynamic.a 00:27:57.540 SYMLINK libspdk_env_dpdk_rpc.so 00:27:57.540 SO libspdk_scheduler_dpdk_governor.so.4.0 00:27:57.540 CC module/accel/dsa/accel_dsa_rpc.o 00:27:57.798 SO libspdk_scheduler_dynamic.so.4.0 00:27:57.798 CC module/accel/iaa/accel_iaa_rpc.o 00:27:57.798 SYMLINK libspdk_scheduler_dpdk_governor.so 00:27:57.798 LIB libspdk_blob_bdev.a 00:27:57.798 CC module/accel/ioat/accel_ioat_rpc.o 00:27:57.798 SYMLINK libspdk_scheduler_dynamic.so 00:27:57.798 SO libspdk_blob_bdev.so.11.0 00:27:57.798 LIB libspdk_accel_error.a 00:27:57.798 LIB libspdk_keyring_file.a 00:27:57.798 LIB libspdk_accel_dsa.a 00:27:57.798 SO libspdk_accel_error.so.2.0 00:27:57.798 SO libspdk_keyring_file.so.1.0 00:27:57.798 SO libspdk_accel_dsa.so.5.0 00:27:57.798 SYMLINK libspdk_blob_bdev.so 00:27:57.798 LIB libspdk_accel_iaa.a 00:27:57.798 LIB libspdk_accel_ioat.a 00:27:57.798 SYMLINK libspdk_accel_error.so 00:27:58.056 SYMLINK libspdk_keyring_file.so 00:27:58.056 SO libspdk_accel_iaa.so.3.0 00:27:58.056 SYMLINK 
libspdk_accel_dsa.so 00:27:58.056 SO libspdk_accel_ioat.so.6.0 00:27:58.056 CC module/scheduler/gscheduler/gscheduler.o 00:27:58.056 SYMLINK libspdk_accel_iaa.so 00:27:58.056 SYMLINK libspdk_accel_ioat.so 00:27:58.314 LIB libspdk_scheduler_gscheduler.a 00:27:58.314 CC module/bdev/gpt/gpt.o 00:27:58.314 CC module/bdev/error/vbdev_error.o 00:27:58.314 CC module/blobfs/bdev/blobfs_bdev.o 00:27:58.314 CC module/bdev/malloc/bdev_malloc.o 00:27:58.314 SO libspdk_scheduler_gscheduler.so.4.0 00:27:58.314 CC module/bdev/delay/vbdev_delay.o 00:27:58.314 CC module/bdev/lvol/vbdev_lvol.o 00:27:58.314 CC module/bdev/null/bdev_null.o 00:27:58.314 CC module/bdev/nvme/bdev_nvme.o 00:27:58.314 SYMLINK libspdk_scheduler_gscheduler.so 00:27:58.314 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:27:58.573 CC module/bdev/gpt/vbdev_gpt.o 00:27:58.573 CC module/bdev/delay/vbdev_delay_rpc.o 00:27:58.573 CC module/bdev/error/vbdev_error_rpc.o 00:27:58.573 LIB libspdk_sock_posix.a 00:27:58.833 SO libspdk_sock_posix.so.6.0 00:27:58.833 CC module/bdev/nvme/bdev_nvme_rpc.o 00:27:58.833 LIB libspdk_blobfs_bdev.a 00:27:58.833 LIB libspdk_bdev_error.a 00:27:58.833 CC module/bdev/null/bdev_null_rpc.o 00:27:58.833 SYMLINK libspdk_sock_posix.so 00:27:58.833 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:27:58.833 SO libspdk_bdev_error.so.6.0 00:27:58.833 LIB libspdk_bdev_gpt.a 00:27:58.833 SO libspdk_blobfs_bdev.so.6.0 00:27:58.833 LIB libspdk_bdev_delay.a 00:27:59.091 SO libspdk_bdev_gpt.so.6.0 00:27:59.091 SO libspdk_bdev_delay.so.6.0 00:27:59.091 SYMLINK libspdk_bdev_error.so 00:27:59.091 CC module/bdev/malloc/bdev_malloc_rpc.o 00:27:59.091 SYMLINK libspdk_blobfs_bdev.so 00:27:59.091 LIB libspdk_bdev_null.a 00:27:59.091 CC module/bdev/nvme/nvme_rpc.o 00:27:59.091 CC module/bdev/nvme/bdev_mdns_client.o 00:27:59.091 SYMLINK libspdk_bdev_gpt.so 00:27:59.091 SO libspdk_bdev_null.so.6.0 00:27:59.091 SYMLINK libspdk_bdev_delay.so 00:27:59.349 SYMLINK libspdk_bdev_null.so 00:27:59.349 LIB libspdk_bdev_malloc.a 00:27:59.349 CC module/bdev/passthru/vbdev_passthru.o 00:27:59.349 CC module/bdev/raid/bdev_raid.o 00:27:59.349 SO libspdk_bdev_malloc.so.6.0 00:27:59.608 LIB libspdk_bdev_lvol.a 00:27:59.608 CC module/bdev/zone_block/vbdev_zone_block.o 00:27:59.608 SO libspdk_bdev_lvol.so.6.0 00:27:59.608 CC module/bdev/split/vbdev_split.o 00:27:59.608 CC module/bdev/raid/bdev_raid_rpc.o 00:27:59.608 CC module/bdev/nvme/vbdev_opal.o 00:27:59.608 SYMLINK libspdk_bdev_malloc.so 00:27:59.608 CC module/bdev/xnvme/bdev_xnvme.o 00:27:59.608 SYMLINK libspdk_bdev_lvol.so 00:27:59.608 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:27:59.866 CC module/bdev/nvme/vbdev_opal_rpc.o 00:27:59.866 CC module/bdev/aio/bdev_aio.o 00:27:59.866 CC module/bdev/raid/bdev_raid_sb.o 00:27:59.866 LIB libspdk_bdev_passthru.a 00:28:00.125 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:00.125 SO libspdk_bdev_passthru.so.6.0 00:28:00.125 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:00.125 CC module/bdev/split/vbdev_split_rpc.o 00:28:00.125 CC module/bdev/raid/raid0.o 00:28:00.125 SYMLINK libspdk_bdev_passthru.so 00:28:00.126 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:28:00.384 LIB libspdk_bdev_zone_block.a 00:28:00.384 SO libspdk_bdev_zone_block.so.6.0 00:28:00.384 LIB libspdk_bdev_split.a 00:28:00.384 SO libspdk_bdev_split.so.6.0 00:28:00.384 CC module/bdev/ftl/bdev_ftl.o 00:28:00.384 LIB libspdk_bdev_xnvme.a 00:28:00.384 CC module/bdev/aio/bdev_aio_rpc.o 00:28:00.384 CC module/bdev/ftl/bdev_ftl_rpc.o 00:28:00.384 SO libspdk_bdev_xnvme.so.3.0 00:28:00.384 SYMLINK 
libspdk_bdev_zone_block.so 00:28:00.384 CC module/bdev/raid/raid1.o 00:28:00.641 CC module/bdev/iscsi/bdev_iscsi.o 00:28:00.641 SYMLINK libspdk_bdev_split.so 00:28:00.641 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:28:00.641 SYMLINK libspdk_bdev_xnvme.so 00:28:00.641 CC module/bdev/raid/concat.o 00:28:00.641 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:00.641 LIB libspdk_bdev_aio.a 00:28:00.641 SO libspdk_bdev_aio.so.6.0 00:28:00.899 SYMLINK libspdk_bdev_aio.so 00:28:00.899 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:00.899 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:00.899 LIB libspdk_bdev_ftl.a 00:28:01.161 SO libspdk_bdev_ftl.so.6.0 00:28:01.161 LIB libspdk_bdev_iscsi.a 00:28:01.161 SO libspdk_bdev_iscsi.so.6.0 00:28:01.161 SYMLINK libspdk_bdev_ftl.so 00:28:01.161 SYMLINK libspdk_bdev_iscsi.so 00:28:01.426 LIB libspdk_bdev_virtio.a 00:28:01.426 LIB libspdk_bdev_raid.a 00:28:01.426 SO libspdk_bdev_virtio.so.6.0 00:28:01.426 SO libspdk_bdev_raid.so.6.0 00:28:01.684 SYMLINK libspdk_bdev_raid.so 00:28:01.684 LIB libspdk_bdev_nvme.a 00:28:01.684 SYMLINK libspdk_bdev_virtio.so 00:28:01.684 SO libspdk_bdev_nvme.so.7.0 00:28:01.941 SYMLINK libspdk_bdev_nvme.so 00:28:02.506 CC module/event/subsystems/vmd/vmd.o 00:28:02.506 CC module/event/subsystems/iobuf/iobuf.o 00:28:02.506 CC module/event/subsystems/vmd/vmd_rpc.o 00:28:02.506 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:28:02.506 CC module/event/subsystems/keyring/keyring.o 00:28:02.506 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:28:02.506 CC module/event/subsystems/scheduler/scheduler.o 00:28:02.506 CC module/event/subsystems/sock/sock.o 00:28:02.506 LIB libspdk_event_scheduler.a 00:28:02.506 LIB libspdk_event_keyring.a 00:28:02.506 SO libspdk_event_scheduler.so.4.0 00:28:02.506 LIB libspdk_event_vhost_blk.a 00:28:02.763 LIB libspdk_event_vmd.a 00:28:02.763 LIB libspdk_event_iobuf.a 00:28:02.763 LIB libspdk_event_sock.a 00:28:02.763 SO libspdk_event_keyring.so.1.0 00:28:02.763 SO libspdk_event_vhost_blk.so.3.0 00:28:02.763 SO libspdk_event_vmd.so.6.0 00:28:02.763 SO libspdk_event_iobuf.so.3.0 00:28:02.763 SO libspdk_event_sock.so.5.0 00:28:02.763 SYMLINK libspdk_event_scheduler.so 00:28:02.763 SYMLINK libspdk_event_keyring.so 00:28:02.763 SYMLINK libspdk_event_vhost_blk.so 00:28:02.763 SYMLINK libspdk_event_sock.so 00:28:02.763 SYMLINK libspdk_event_iobuf.so 00:28:02.763 SYMLINK libspdk_event_vmd.so 00:28:03.020 CC module/event/subsystems/accel/accel.o 00:28:03.278 LIB libspdk_event_accel.a 00:28:03.278 SO libspdk_event_accel.so.6.0 00:28:03.278 SYMLINK libspdk_event_accel.so 00:28:03.841 CC module/event/subsystems/bdev/bdev.o 00:28:03.841 LIB libspdk_event_bdev.a 00:28:03.841 SO libspdk_event_bdev.so.6.0 00:28:04.099 SYMLINK libspdk_event_bdev.so 00:28:04.099 CC module/event/subsystems/nbd/nbd.o 00:28:04.099 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:28:04.099 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:28:04.099 CC module/event/subsystems/scsi/scsi.o 00:28:04.099 CC module/event/subsystems/ublk/ublk.o 00:28:04.356 LIB libspdk_event_nbd.a 00:28:04.356 SO libspdk_event_nbd.so.6.0 00:28:04.356 LIB libspdk_event_ublk.a 00:28:04.356 LIB libspdk_event_scsi.a 00:28:04.356 SO libspdk_event_ublk.so.3.0 00:28:04.356 SYMLINK libspdk_event_nbd.so 00:28:04.613 SO libspdk_event_scsi.so.6.0 00:28:04.613 LIB libspdk_event_nvmf.a 00:28:04.613 SYMLINK libspdk_event_ublk.so 00:28:04.613 SYMLINK libspdk_event_scsi.so 00:28:04.613 SO libspdk_event_nvmf.so.6.0 00:28:04.613 SYMLINK libspdk_event_nvmf.so 00:28:04.870 CC 
module/event/subsystems/iscsi/iscsi.o 00:28:04.870 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:28:05.128 LIB libspdk_event_vhost_scsi.a 00:28:05.128 SO libspdk_event_vhost_scsi.so.3.0 00:28:05.128 LIB libspdk_event_iscsi.a 00:28:05.128 SO libspdk_event_iscsi.so.6.0 00:28:05.128 SYMLINK libspdk_event_vhost_scsi.so 00:28:05.128 SYMLINK libspdk_event_iscsi.so 00:28:05.386 SO libspdk.so.6.0 00:28:05.386 SYMLINK libspdk.so 00:28:05.644 CC app/trace_record/trace_record.o 00:28:05.644 CC app/spdk_lspci/spdk_lspci.o 00:28:05.644 CXX app/trace/trace.o 00:28:05.644 CC app/iscsi_tgt/iscsi_tgt.o 00:28:05.644 CC examples/accel/perf/accel_perf.o 00:28:05.644 CC app/nvmf_tgt/nvmf_main.o 00:28:05.644 CC app/spdk_tgt/spdk_tgt.o 00:28:05.901 CC examples/bdev/hello_world/hello_bdev.o 00:28:05.901 CC test/app/bdev_svc/bdev_svc.o 00:28:05.901 CC test/accel/dif/dif.o 00:28:05.901 LINK spdk_lspci 00:28:05.901 LINK iscsi_tgt 00:28:06.158 LINK nvmf_tgt 00:28:06.158 LINK spdk_trace_record 00:28:06.158 LINK spdk_tgt 00:28:06.158 LINK bdev_svc 00:28:06.416 LINK hello_bdev 00:28:06.416 CC examples/bdev/bdevperf/bdevperf.o 00:28:06.416 LINK spdk_trace 00:28:06.416 LINK accel_perf 00:28:06.416 CC app/spdk_nvme_perf/perf.o 00:28:06.684 LINK dif 00:28:06.684 CC examples/ioat/perf/perf.o 00:28:06.684 CC examples/blob/hello_world/hello_blob.o 00:28:06.684 CC test/app/histogram_perf/histogram_perf.o 00:28:06.684 CC test/app/jsoncat/jsoncat.o 00:28:06.684 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:28:06.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:28:06.942 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:28:06.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:28:06.942 LINK histogram_perf 00:28:06.942 LINK ioat_perf 00:28:06.942 LINK jsoncat 00:28:06.942 LINK hello_blob 00:28:07.200 TEST_HEADER include/spdk/accel.h 00:28:07.200 TEST_HEADER include/spdk/accel_module.h 00:28:07.200 TEST_HEADER include/spdk/assert.h 00:28:07.200 TEST_HEADER include/spdk/barrier.h 00:28:07.200 TEST_HEADER include/spdk/base64.h 00:28:07.200 TEST_HEADER include/spdk/bdev.h 00:28:07.200 TEST_HEADER include/spdk/bdev_module.h 00:28:07.200 TEST_HEADER include/spdk/bdev_zone.h 00:28:07.200 TEST_HEADER include/spdk/bit_array.h 00:28:07.200 CC examples/ioat/verify/verify.o 00:28:07.200 TEST_HEADER include/spdk/bit_pool.h 00:28:07.200 TEST_HEADER include/spdk/blob_bdev.h 00:28:07.200 TEST_HEADER include/spdk/blobfs_bdev.h 00:28:07.200 TEST_HEADER include/spdk/blobfs.h 00:28:07.200 CC test/bdev/bdevio/bdevio.o 00:28:07.200 TEST_HEADER include/spdk/blob.h 00:28:07.200 TEST_HEADER include/spdk/conf.h 00:28:07.200 TEST_HEADER include/spdk/config.h 00:28:07.200 TEST_HEADER include/spdk/cpuset.h 00:28:07.200 TEST_HEADER include/spdk/crc16.h 00:28:07.200 TEST_HEADER include/spdk/crc32.h 00:28:07.200 LINK nvme_fuzz 00:28:07.200 TEST_HEADER include/spdk/crc64.h 00:28:07.200 TEST_HEADER include/spdk/dif.h 00:28:07.200 TEST_HEADER include/spdk/dma.h 00:28:07.200 TEST_HEADER include/spdk/endian.h 00:28:07.200 TEST_HEADER include/spdk/env_dpdk.h 00:28:07.200 TEST_HEADER include/spdk/env.h 00:28:07.200 TEST_HEADER include/spdk/event.h 00:28:07.200 TEST_HEADER include/spdk/fd_group.h 00:28:07.200 TEST_HEADER include/spdk/fd.h 00:28:07.200 TEST_HEADER include/spdk/file.h 00:28:07.200 TEST_HEADER include/spdk/ftl.h 00:28:07.200 TEST_HEADER include/spdk/gpt_spec.h 00:28:07.200 TEST_HEADER include/spdk/hexlify.h 00:28:07.200 CC test/blobfs/mkfs/mkfs.o 00:28:07.200 TEST_HEADER include/spdk/histogram_data.h 00:28:07.200 TEST_HEADER include/spdk/idxd.h 00:28:07.200 
TEST_HEADER include/spdk/idxd_spec.h 00:28:07.200 TEST_HEADER include/spdk/init.h 00:28:07.200 TEST_HEADER include/spdk/ioat.h 00:28:07.200 TEST_HEADER include/spdk/ioat_spec.h 00:28:07.200 TEST_HEADER include/spdk/iscsi_spec.h 00:28:07.200 TEST_HEADER include/spdk/json.h 00:28:07.200 TEST_HEADER include/spdk/jsonrpc.h 00:28:07.200 TEST_HEADER include/spdk/keyring.h 00:28:07.458 TEST_HEADER include/spdk/keyring_module.h 00:28:07.458 TEST_HEADER include/spdk/likely.h 00:28:07.458 TEST_HEADER include/spdk/log.h 00:28:07.458 TEST_HEADER include/spdk/lvol.h 00:28:07.458 TEST_HEADER include/spdk/memory.h 00:28:07.458 TEST_HEADER include/spdk/mmio.h 00:28:07.458 TEST_HEADER include/spdk/nbd.h 00:28:07.458 TEST_HEADER include/spdk/notify.h 00:28:07.458 TEST_HEADER include/spdk/nvme.h 00:28:07.458 TEST_HEADER include/spdk/nvme_intel.h 00:28:07.458 TEST_HEADER include/spdk/nvme_ocssd.h 00:28:07.458 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:28:07.458 TEST_HEADER include/spdk/nvme_spec.h 00:28:07.458 TEST_HEADER include/spdk/nvme_zns.h 00:28:07.458 TEST_HEADER include/spdk/nvmf_cmd.h 00:28:07.458 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:28:07.458 TEST_HEADER include/spdk/nvmf.h 00:28:07.458 TEST_HEADER include/spdk/nvmf_spec.h 00:28:07.458 TEST_HEADER include/spdk/nvmf_transport.h 00:28:07.458 TEST_HEADER include/spdk/opal.h 00:28:07.458 TEST_HEADER include/spdk/opal_spec.h 00:28:07.458 TEST_HEADER include/spdk/pci_ids.h 00:28:07.458 TEST_HEADER include/spdk/pipe.h 00:28:07.458 TEST_HEADER include/spdk/queue.h 00:28:07.458 TEST_HEADER include/spdk/reduce.h 00:28:07.458 TEST_HEADER include/spdk/rpc.h 00:28:07.458 TEST_HEADER include/spdk/scheduler.h 00:28:07.458 TEST_HEADER include/spdk/scsi.h 00:28:07.458 TEST_HEADER include/spdk/scsi_spec.h 00:28:07.458 TEST_HEADER include/spdk/sock.h 00:28:07.458 LINK vhost_fuzz 00:28:07.458 TEST_HEADER include/spdk/stdinc.h 00:28:07.458 TEST_HEADER include/spdk/string.h 00:28:07.458 TEST_HEADER include/spdk/thread.h 00:28:07.458 TEST_HEADER include/spdk/trace.h 00:28:07.458 TEST_HEADER include/spdk/trace_parser.h 00:28:07.458 CC examples/blob/cli/blobcli.o 00:28:07.458 TEST_HEADER include/spdk/tree.h 00:28:07.458 TEST_HEADER include/spdk/ublk.h 00:28:07.458 TEST_HEADER include/spdk/util.h 00:28:07.458 TEST_HEADER include/spdk/uuid.h 00:28:07.458 TEST_HEADER include/spdk/version.h 00:28:07.458 TEST_HEADER include/spdk/vfio_user_pci.h 00:28:07.458 TEST_HEADER include/spdk/vfio_user_spec.h 00:28:07.458 TEST_HEADER include/spdk/vhost.h 00:28:07.458 TEST_HEADER include/spdk/vmd.h 00:28:07.458 TEST_HEADER include/spdk/xor.h 00:28:07.458 TEST_HEADER include/spdk/zipf.h 00:28:07.458 CXX test/cpp_headers/accel.o 00:28:07.458 CXX test/cpp_headers/accel_module.o 00:28:07.458 LINK bdevperf 00:28:07.459 LINK mkfs 00:28:07.459 LINK spdk_nvme_perf 00:28:07.717 LINK verify 00:28:07.717 CXX test/cpp_headers/assert.o 00:28:07.717 LINK bdevio 00:28:07.976 CC examples/nvme/hello_world/hello_world.o 00:28:07.976 CC examples/nvme/reconnect/reconnect.o 00:28:07.976 CC app/spdk_nvme_identify/identify.o 00:28:07.976 CC examples/sock/hello_world/hello_sock.o 00:28:07.976 CXX test/cpp_headers/barrier.o 00:28:07.976 CC examples/vmd/lsvmd/lsvmd.o 00:28:07.976 CC examples/vmd/led/led.o 00:28:07.976 LINK blobcli 00:28:07.976 CXX test/cpp_headers/base64.o 00:28:08.233 LINK lsvmd 00:28:08.233 LINK led 00:28:08.233 LINK hello_world 00:28:08.233 CXX test/cpp_headers/bdev.o 00:28:08.233 CXX test/cpp_headers/bdev_module.o 00:28:08.233 CC examples/nvmf/nvmf/nvmf.o 00:28:08.233 CXX 
test/cpp_headers/bdev_zone.o 00:28:08.492 LINK hello_sock 00:28:08.492 CXX test/cpp_headers/bit_array.o 00:28:08.492 CXX test/cpp_headers/bit_pool.o 00:28:08.492 LINK reconnect 00:28:08.492 CC examples/nvme/nvme_manage/nvme_manage.o 00:28:08.750 CC examples/util/zipf/zipf.o 00:28:08.750 CXX test/cpp_headers/blob_bdev.o 00:28:08.750 LINK nvmf 00:28:08.750 CC test/event/event_perf/event_perf.o 00:28:08.750 CC test/dma/test_dma/test_dma.o 00:28:09.009 CC test/event/reactor/reactor.o 00:28:09.009 LINK zipf 00:28:09.009 CC test/env/mem_callbacks/mem_callbacks.o 00:28:09.009 LINK spdk_nvme_identify 00:28:09.009 CXX test/cpp_headers/blobfs_bdev.o 00:28:09.009 LINK event_perf 00:28:09.009 LINK reactor 00:28:09.009 CXX test/cpp_headers/blobfs.o 00:28:09.285 LINK iscsi_fuzz 00:28:09.285 CC examples/nvme/arbitration/arbitration.o 00:28:09.285 CC app/spdk_nvme_discover/discovery_aer.o 00:28:09.285 CC examples/nvme/hotplug/hotplug.o 00:28:09.285 CXX test/cpp_headers/blob.o 00:28:09.285 LINK test_dma 00:28:09.285 LINK nvme_manage 00:28:09.285 CC examples/nvme/cmb_copy/cmb_copy.o 00:28:09.543 CC test/event/reactor_perf/reactor_perf.o 00:28:09.543 CXX test/cpp_headers/conf.o 00:28:09.543 CXX test/cpp_headers/config.o 00:28:09.543 LINK spdk_nvme_discover 00:28:09.543 LINK mem_callbacks 00:28:09.543 LINK hotplug 00:28:09.543 LINK cmb_copy 00:28:09.543 LINK reactor_perf 00:28:09.543 CC test/app/stub/stub.o 00:28:09.801 LINK arbitration 00:28:09.801 CXX test/cpp_headers/cpuset.o 00:28:09.801 LINK stub 00:28:09.801 CC examples/thread/thread/thread_ex.o 00:28:10.059 CC app/spdk_top/spdk_top.o 00:28:10.059 CXX test/cpp_headers/crc16.o 00:28:10.059 CC test/event/app_repeat/app_repeat.o 00:28:10.059 CC test/env/vtophys/vtophys.o 00:28:10.059 CC test/lvol/esnap/esnap.o 00:28:10.059 CC test/event/scheduler/scheduler.o 00:28:10.059 CC examples/nvme/abort/abort.o 00:28:10.059 CXX test/cpp_headers/crc32.o 00:28:10.059 LINK vtophys 00:28:10.059 LINK app_repeat 00:28:10.059 CC examples/idxd/perf/perf.o 00:28:10.059 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:28:10.316 LINK thread 00:28:10.316 CXX test/cpp_headers/crc64.o 00:28:10.316 LINK scheduler 00:28:10.316 LINK pmr_persistence 00:28:10.574 CXX test/cpp_headers/dif.o 00:28:10.574 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:28:10.574 LINK idxd_perf 00:28:10.574 CC examples/interrupt_tgt/interrupt_tgt.o 00:28:10.574 LINK abort 00:28:10.574 CXX test/cpp_headers/dma.o 00:28:10.832 CC test/rpc_client/rpc_client_test.o 00:28:10.832 LINK env_dpdk_post_init 00:28:10.832 CXX test/cpp_headers/endian.o 00:28:10.832 CC test/nvme/aer/aer.o 00:28:10.832 CC test/thread/poller_perf/poller_perf.o 00:28:10.832 CC test/nvme/reset/reset.o 00:28:10.832 LINK interrupt_tgt 00:28:11.090 CC test/env/memory/memory_ut.o 00:28:11.090 CXX test/cpp_headers/env_dpdk.o 00:28:11.090 LINK rpc_client_test 00:28:11.090 LINK poller_perf 00:28:11.090 CC app/vhost/vhost.o 00:28:11.090 LINK reset 00:28:11.348 LINK aer 00:28:11.348 CXX test/cpp_headers/env.o 00:28:11.348 LINK vhost 00:28:11.348 CC test/nvme/sgl/sgl.o 00:28:11.606 CC app/spdk_dd/spdk_dd.o 00:28:11.606 CC test/nvme/e2edp/nvme_dp.o 00:28:11.606 LINK spdk_top 00:28:11.606 CC app/fio/nvme/fio_plugin.o 00:28:11.606 CXX test/cpp_headers/event.o 00:28:11.606 CC app/fio/bdev/fio_plugin.o 00:28:11.864 CXX test/cpp_headers/fd_group.o 00:28:11.864 LINK sgl 00:28:11.864 CXX test/cpp_headers/fd.o 00:28:11.864 CXX test/cpp_headers/file.o 00:28:12.121 CXX test/cpp_headers/ftl.o 00:28:12.121 LINK spdk_dd 00:28:12.121 CC 
test/env/pci/pci_ut.o 00:28:12.121 CXX test/cpp_headers/gpt_spec.o 00:28:12.121 LINK nvme_dp 00:28:12.379 LINK spdk_bdev 00:28:12.379 CC test/nvme/overhead/overhead.o 00:28:12.379 CXX test/cpp_headers/hexlify.o 00:28:12.379 LINK memory_ut 00:28:12.379 CXX test/cpp_headers/histogram_data.o 00:28:12.379 CC test/nvme/err_injection/err_injection.o 00:28:12.636 CC test/nvme/startup/startup.o 00:28:12.636 CXX test/cpp_headers/idxd.o 00:28:12.636 CC test/nvme/reserve/reserve.o 00:28:12.636 LINK pci_ut 00:28:12.636 LINK err_injection 00:28:12.636 LINK spdk_nvme 00:28:12.636 CC test/nvme/simple_copy/simple_copy.o 00:28:12.636 LINK overhead 00:28:12.893 CC test/nvme/connect_stress/connect_stress.o 00:28:12.893 LINK startup 00:28:12.893 CXX test/cpp_headers/idxd_spec.o 00:28:12.893 CC test/nvme/boot_partition/boot_partition.o 00:28:12.893 CC test/nvme/compliance/nvme_compliance.o 00:28:12.893 LINK simple_copy 00:28:13.151 LINK reserve 00:28:13.151 CC test/nvme/fused_ordering/fused_ordering.o 00:28:13.151 LINK boot_partition 00:28:13.151 CXX test/cpp_headers/init.o 00:28:13.151 LINK connect_stress 00:28:13.409 CC test/nvme/doorbell_aers/doorbell_aers.o 00:28:13.409 CXX test/cpp_headers/ioat.o 00:28:13.409 CXX test/cpp_headers/ioat_spec.o 00:28:13.409 CXX test/cpp_headers/iscsi_spec.o 00:28:13.409 CC test/nvme/fdp/fdp.o 00:28:13.409 CC test/nvme/cuse/cuse.o 00:28:13.409 LINK fused_ordering 00:28:13.666 CXX test/cpp_headers/json.o 00:28:13.666 CXX test/cpp_headers/jsonrpc.o 00:28:13.666 CXX test/cpp_headers/keyring.o 00:28:13.666 CXX test/cpp_headers/keyring_module.o 00:28:13.666 LINK doorbell_aers 00:28:13.666 LINK nvme_compliance 00:28:13.666 CXX test/cpp_headers/likely.o 00:28:13.666 CXX test/cpp_headers/log.o 00:28:13.924 CXX test/cpp_headers/lvol.o 00:28:13.924 CXX test/cpp_headers/memory.o 00:28:13.924 CXX test/cpp_headers/mmio.o 00:28:13.924 LINK fdp 00:28:13.924 CXX test/cpp_headers/nbd.o 00:28:13.924 CXX test/cpp_headers/notify.o 00:28:13.924 CXX test/cpp_headers/nvme.o 00:28:13.924 CXX test/cpp_headers/nvme_intel.o 00:28:13.924 CXX test/cpp_headers/nvme_ocssd.o 00:28:13.924 CXX test/cpp_headers/nvme_ocssd_spec.o 00:28:14.182 CXX test/cpp_headers/nvme_spec.o 00:28:14.182 CXX test/cpp_headers/nvme_zns.o 00:28:14.182 CXX test/cpp_headers/nvmf_cmd.o 00:28:14.182 CXX test/cpp_headers/nvmf_fc_spec.o 00:28:14.182 CXX test/cpp_headers/nvmf.o 00:28:14.182 CXX test/cpp_headers/nvmf_spec.o 00:28:14.182 CXX test/cpp_headers/nvmf_transport.o 00:28:14.182 CXX test/cpp_headers/opal.o 00:28:14.440 CXX test/cpp_headers/opal_spec.o 00:28:14.440 CXX test/cpp_headers/pci_ids.o 00:28:14.440 CXX test/cpp_headers/pipe.o 00:28:14.440 CXX test/cpp_headers/queue.o 00:28:14.440 CXX test/cpp_headers/reduce.o 00:28:14.440 CXX test/cpp_headers/rpc.o 00:28:14.440 CXX test/cpp_headers/scheduler.o 00:28:14.440 CXX test/cpp_headers/scsi.o 00:28:14.440 CXX test/cpp_headers/scsi_spec.o 00:28:14.440 CXX test/cpp_headers/sock.o 00:28:14.699 CXX test/cpp_headers/stdinc.o 00:28:14.699 CXX test/cpp_headers/string.o 00:28:14.699 CXX test/cpp_headers/thread.o 00:28:14.699 CXX test/cpp_headers/trace.o 00:28:14.699 CXX test/cpp_headers/trace_parser.o 00:28:14.699 CXX test/cpp_headers/tree.o 00:28:14.699 CXX test/cpp_headers/ublk.o 00:28:14.699 CXX test/cpp_headers/util.o 00:28:14.699 CXX test/cpp_headers/uuid.o 00:28:14.956 CXX test/cpp_headers/version.o 00:28:14.957 LINK cuse 00:28:14.957 CXX test/cpp_headers/vfio_user_pci.o 00:28:14.957 CXX test/cpp_headers/vfio_user_spec.o 00:28:14.957 CXX test/cpp_headers/vhost.o 00:28:14.957 CXX 
test/cpp_headers/vmd.o 00:28:14.957 CXX test/cpp_headers/xor.o 00:28:14.957 CXX test/cpp_headers/zipf.o 00:28:17.488 LINK esnap 00:28:18.054 00:28:18.054 real 1m22.954s 00:28:18.054 user 7m46.189s 00:28:18.054 sys 2m8.168s 00:28:18.054 08:57:19 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:28:18.054 ************************************ 00:28:18.054 END TEST make 00:28:18.054 08:57:19 -- common/autotest_common.sh@10 -- $ set +x 00:28:18.054 ************************************ 00:28:18.054 08:57:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:28:18.054 08:57:19 -- pm/common@30 -- $ signal_monitor_resources TERM 00:28:18.054 08:57:19 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:28:18.054 08:57:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:18.054 08:57:19 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:18.054 08:57:19 -- pm/common@45 -- $ pid=5103 00:28:18.054 08:57:19 -- pm/common@52 -- $ sudo kill -TERM 5103 00:28:18.054 08:57:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:18.054 08:57:20 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:18.054 08:57:20 -- pm/common@45 -- $ pid=5102 00:28:18.054 08:57:20 -- pm/common@52 -- $ sudo kill -TERM 5102 00:28:18.311 08:57:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:18.311 08:57:20 -- nvmf/common.sh@7 -- # uname -s 00:28:18.311 08:57:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.311 08:57:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.311 08:57:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.311 08:57:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.311 08:57:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.311 08:57:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.311 08:57:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.311 08:57:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.311 08:57:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.311 08:57:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.311 08:57:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:28:18.311 08:57:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:28:18.311 08:57:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.311 08:57:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.311 08:57:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:18.311 08:57:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.311 08:57:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:18.311 08:57:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.311 08:57:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.311 08:57:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.311 08:57:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.311 08:57:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.311 08:57:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.311 08:57:20 -- paths/export.sh@5 -- # export PATH 00:28:18.311 08:57:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.311 08:57:20 -- nvmf/common.sh@47 -- # : 0 00:28:18.311 08:57:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.311 08:57:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.311 08:57:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.311 08:57:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.311 08:57:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.312 08:57:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.312 08:57:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.312 08:57:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.312 08:57:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:28:18.312 08:57:20 -- spdk/autotest.sh@32 -- # uname -s 00:28:18.312 08:57:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:28:18.312 08:57:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:28:18.312 08:57:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:18.312 08:57:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:28:18.312 08:57:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:18.312 08:57:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:28:18.312 08:57:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:28:18.312 08:57:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:28:18.312 08:57:20 -- spdk/autotest.sh@48 -- # udevadm_pid=53092 00:28:18.312 08:57:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:28:18.312 08:57:20 -- pm/common@17 -- # local monitor 00:28:18.312 08:57:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:18.312 08:57:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:28:18.312 08:57:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53097 00:28:18.312 08:57:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:18.312 08:57:20 -- pm/common@21 -- # date +%s 00:28:18.312 08:57:20 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53099 00:28:18.312 08:57:20 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713430640 00:28:18.312 08:57:20 -- pm/common@26 -- # sleep 1 00:28:18.312 08:57:20 -- pm/common@21 -- # date +%s 00:28:18.312 08:57:20 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713430640 00:28:18.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713430640_collect-vmstat.pm.log 00:28:18.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713430640_collect-cpu-load.pm.log 00:28:19.247 08:57:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:28:19.247 08:57:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:28:19.247 08:57:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:19.247 08:57:21 -- common/autotest_common.sh@10 -- # set +x 00:28:19.247 08:57:21 -- spdk/autotest.sh@59 -- # create_test_list 00:28:19.247 08:57:21 -- common/autotest_common.sh@734 -- # xtrace_disable 00:28:19.247 08:57:21 -- common/autotest_common.sh@10 -- # set +x 00:28:19.247 08:57:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:28:19.247 08:57:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:28:19.247 08:57:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:28:19.247 08:57:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:28:19.247 08:57:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:28:19.247 08:57:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:28:19.247 08:57:21 -- common/autotest_common.sh@1441 -- # uname 00:28:19.247 08:57:21 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:28:19.247 08:57:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:28:19.247 08:57:21 -- common/autotest_common.sh@1461 -- # uname 00:28:19.247 08:57:21 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:28:19.247 08:57:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:28:19.247 08:57:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:28:19.247 08:57:21 -- spdk/autotest.sh@72 -- # hash lcov 00:28:19.247 08:57:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:19.247 08:57:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:28:19.247 --rc lcov_branch_coverage=1 00:28:19.247 --rc lcov_function_coverage=1 00:28:19.247 --rc genhtml_branch_coverage=1 00:28:19.247 --rc genhtml_function_coverage=1 00:28:19.247 --rc genhtml_legend=1 00:28:19.247 --rc geninfo_all_blocks=1 00:28:19.247 ' 00:28:19.247 08:57:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:28:19.247 --rc lcov_branch_coverage=1 00:28:19.247 --rc lcov_function_coverage=1 00:28:19.248 --rc genhtml_branch_coverage=1 00:28:19.248 --rc genhtml_function_coverage=1 00:28:19.248 --rc genhtml_legend=1 00:28:19.248 --rc geninfo_all_blocks=1 00:28:19.248 ' 00:28:19.248 08:57:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:28:19.248 --rc lcov_branch_coverage=1 00:28:19.248 --rc lcov_function_coverage=1 00:28:19.248 --rc genhtml_branch_coverage=1 00:28:19.248 --rc genhtml_function_coverage=1 00:28:19.248 --rc genhtml_legend=1 00:28:19.248 --rc geninfo_all_blocks=1 00:28:19.248 --no-external' 00:28:19.248 08:57:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:28:19.248 --rc lcov_branch_coverage=1 00:28:19.248 --rc lcov_function_coverage=1 00:28:19.248 --rc genhtml_branch_coverage=1 00:28:19.248 --rc genhtml_function_coverage=1 00:28:19.248 --rc genhtml_legend=1 00:28:19.248 --rc geninfo_all_blocks=1 00:28:19.248 --no-external' 00:28:19.248 08:57:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:28:19.506 lcov: LCOV version 1.14 00:28:19.506 08:57:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:28:29.487 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:28:29.487 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:28:29.487 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:28:29.487 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:28:29.487 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:28:29.487 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:28:36.056 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:28:36.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:28:51.013 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:28:51.013 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:28:51.013 
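The two lcov invocations above implement the usual baseline-then-merge coverage flow: an initial `-c -i` capture records every instrumented file at zero counts so sources never touched by a test still appear in the final report. A minimal sketch of the full flow follows; only `cov_base.info` appears in the log above, so `cov_test.info` and `cov_total.info` are assumed names for the later tracefiles.

```bash
# Sketch of the coverage flow started above; cov_test.info and cov_total.info
# are assumed names for the post-run and merged tracefiles.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
           --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
           --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
SRC=/home/vagrant/spdk_repo/spdk
OUT=$SRC/../output

# Zero-count baseline over the .gcno files (this is the step running above).
lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
# ... test suites execute here, writing .gcda count files ...
# Capture the real counts, then merge so untouched files keep their zero rows.
lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
```

The `geninfo: ... no functions found` warnings that follow appear to be the expected side effect of this baseline pass: the objects named carry no instrumented functions yet, so geninfo has nothing to record for them.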
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:28:51.014 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:28:51.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:28:52.914 08:57:54 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:28:52.914 08:57:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:52.914 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:28:52.914 08:57:54 -- spdk/autotest.sh@91 -- # rm -f 00:28:52.914 08:57:54 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:53.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:54.048 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:54.048 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:54.048 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:54.048 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:54.048 08:57:56 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:28:54.048 08:57:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:28:54.048 08:57:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 
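Everything from `get_zoned_devs` down is the zoned-namespace scan: for each `/sys/block/nvme*` node the trace reads the `queue/zoned` attribute and compares it against `none`. A condensed sketch of the same logic:

```bash
# Condensed sketch of the zoned-device scan traced above: a namespace is
# zoned when /sys/block/<dev>/queue/zoned reads anything other than "none".
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned ]] || continue             # not a zoned-capable block device
    [[ $(<"$nvme/queue/zoned") == none ]] && continue  # conventional namespace
    zoned_devs[$dev]=1                                 # mark for exclusion from generic tests
done
echo "zoned devices found: ${#zoned_devs[@]}"
```

In this run every comparison evaluated `[[ none != none ]]`, so the map stays empty and the `(( 0 > 0 ))` guard below skips the exclusion path entirely.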
00:28:54.048 08:57:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:28:54.048 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:54.049 08:57:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:28:54.049 08:57:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:28:54.049 08:57:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:54.049 08:57:56 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:28:54.049 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.049 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.049 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:28:54.049 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:28:54.049 08:57:56 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:28:54.307 No valid GPT data, bailing 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.307 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.307 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:28:54.307 1+0 records in 00:28:54.307 1+0 records out 00:28:54.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128879 s, 81.4 MB/s 00:28:54.307 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.307 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.307 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:28:54.307 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:28:54.307 08:57:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:28:54.307 No valid GPT data, bailing 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.307 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.307 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:28:54.307 1+0 records in 00:28:54.307 1+0 records out 00:28:54.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387001 s, 271 MB/s 00:28:54.307 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.307 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.307 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:28:54.307 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:28:54.307 08:57:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:28:54.307 No valid GPT data, bailing 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:28:54.307 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.307 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.307 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:28:54.564 1+0 records in 00:28:54.564 1+0 records out 00:28:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525897 s, 199 MB/s 00:28:54.564 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.564 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.564 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:28:54.564 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:28:54.564 08:57:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:28:54.564 No valid GPT data, bailing 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.564 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.564 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:28:54.564 1+0 records in 00:28:54.564 1+0 records out 00:28:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463542 s, 226 MB/s 00:28:54.564 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.564 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.564 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:28:54.564 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:28:54.564 08:57:56 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:28:54.564 No valid GPT data, bailing 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.564 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.564 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:28:54.564 1+0 records in 00:28:54.564 1+0 records out 00:28:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531967 s, 197 MB/s 00:28:54.564 08:57:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:28:54.564 08:57:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:28:54.564 08:57:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:28:54.564 08:57:56 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:28:54.564 08:57:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:28:54.564 No valid GPT data, bailing 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:28:54.564 08:57:56 -- scripts/common.sh@391 -- # pt= 00:28:54.564 08:57:56 -- scripts/common.sh@392 -- # return 1 00:28:54.564 08:57:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:28:54.824 1+0 records in 00:28:54.824 1+0 records out 00:28:54.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579905 s, 181 MB/s 00:28:54.824 08:57:56 -- spdk/autotest.sh@118 -- # sync 00:28:54.824 08:57:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:28:54.824 08:57:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:28:54.824 08:57:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:28:56.807 08:57:58 -- spdk/autotest.sh@124 -- # uname -s 00:28:56.807 08:57:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:28:56.807 08:57:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:28:56.807 08:57:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.807 08:57:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.807 08:57:58 -- common/autotest_common.sh@10 -- # set +x 00:28:56.807 ************************************ 00:28:56.807 START TEST setup.sh 00:28:56.807 ************************************ 00:28:56.807 08:57:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:28:56.807 * Looking for test storage... 00:28:56.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:28:56.807 08:57:58 -- setup/test-setup.sh@10 -- # uname -s 00:28:56.807 08:57:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:28:56.807 08:57:58 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:28:56.807 08:57:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.807 08:57:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.807 08:57:58 -- common/autotest_common.sh@10 -- # set +x 00:28:56.807 ************************************ 00:28:56.807 START TEST acl 00:28:56.807 ************************************ 00:28:56.807 08:57:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:28:56.807 * Looking for test storage... 
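Each whole namespace above passes through the same gate before being scrubbed: `scripts/spdk-gpt.py` probes for a GPT (printing `No valid GPT data, bailing` when none exists), `blkid -s PTTYPE` confirms there is no partition table at all, and only then does `dd` zero the first MiB so stale metadata cannot leak into the tests. A simplified sketch, assuming `SPDK_ROOT` points at the repo checkout; the real `block_in_use` helper in `scripts/common.sh` handles more cases:

```bash
# Simplified sketch of the per-namespace scrub above; the real block_in_use
# helper also checks whether the device is mounted or otherwise claimed.
shopt -s extglob
SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
for dev in /dev/nvme*n!(*p*); do                     # whole namespaces, no partitions
    "$SPDK_ROOT/scripts/spdk-gpt.py" "$dev" || true  # prints "No valid GPT data, bailing"
    pt=$(blkid -s PTTYPE -o value "$dev" || true)    # blkid exits nonzero when nothing found
    if [[ -z $pt ]]; then                            # no partition table -> device is free
        dd if=/dev/zero of="$dev" bs=1M count=1      # wipe the first MiB
    fi
done
```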
00:28:56.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:28:56.807 08:57:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:28:56.807 08:57:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:28:56.807 08:57:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:28:56.807 08:57:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:28:56.807 08:57:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:28:56.807 08:57:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:28:56.807 08:57:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:56.807 08:57:58 -- setup/acl.sh@12 -- # devs=() 00:28:56.807 08:57:58 -- setup/acl.sh@12 -- # declare -a devs 
00:28:56.807 08:57:58 -- setup/acl.sh@13 -- # drivers=() 00:28:56.807 08:57:58 -- setup/acl.sh@13 -- # declare -A drivers 00:28:56.807 08:57:58 -- setup/acl.sh@51 -- # setup reset 00:28:56.807 08:57:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:28:56.807 08:57:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:58.211 08:58:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:28:58.211 08:58:00 -- setup/acl.sh@16 -- # local dev driver 00:28:58.211 08:58:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:58.211 08:58:00 -- setup/acl.sh@15 -- # setup output status 00:28:58.211 08:58:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:28:58.211 08:58:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:28:58.777 08:58:00 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:28:58.777 08:58:00 -- setup/acl.sh@19 -- # continue 00:28:58.777 08:58:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.343 Hugepages 00:28:59.343 node hugesize free / total 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # continue 00:28:59.343 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.343 00:28:59.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # continue 00:28:59.343 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:28:59.343 08:58:01 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:28:59.343 08:58:01 -- setup/acl.sh@20 -- # continue 00:28:59.343 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.343 08:58:01 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:28:59.343 08:58:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:28:59.343 08:58:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:28:59.343 08:58:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:28:59.343 08:58:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:28:59.343 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.601 08:58:01 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:28:59.601 08:58:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:28:59.601 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.601 08:58:01 -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:28:59.601 08:58:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:28:59.601 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.601 08:58:01 -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:28:59.601 08:58:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:28:59.601 08:58:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:28:59.601 08:58:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:28:59.601 08:58:01 
-- setup/acl.sh@24 -- # (( 4 > 0 )) 00:28:59.601 08:58:01 -- setup/acl.sh@54 -- # run_test denied denied 00:28:59.601 08:58:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:59.601 08:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:59.601 08:58:01 -- common/autotest_common.sh@10 -- # set +x 00:28:59.860 ************************************ 00:28:59.860 START TEST denied 00:28:59.860 ************************************ 00:28:59.860 08:58:01 -- common/autotest_common.sh@1111 -- # denied 00:28:59.860 08:58:01 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:28:59.860 08:58:01 -- setup/acl.sh@38 -- # setup output config 00:28:59.860 08:58:01 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:28:59.860 08:58:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:28:59.860 08:58:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:01.235 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:29:01.235 08:58:03 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:29:01.235 08:58:03 -- setup/acl.sh@28 -- # local dev driver 00:29:01.235 08:58:03 -- setup/acl.sh@30 -- # for dev in "$@" 00:29:01.235 08:58:03 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:29:01.235 08:58:03 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:29:01.235 08:58:03 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:29:01.235 08:58:03 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:29:01.235 08:58:03 -- setup/acl.sh@41 -- # setup reset 00:29:01.235 08:58:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:29:01.235 08:58:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:07.809 00:29:07.809 real 0m7.634s 00:29:07.809 user 0m0.919s 00:29:07.809 sys 0m1.789s 00:29:07.809 ************************************ 00:29:07.809 END TEST denied 00:29:07.809 ************************************ 00:29:07.809 08:58:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.809 08:58:09 -- common/autotest_common.sh@10 -- # set +x 00:29:07.809 08:58:09 -- setup/acl.sh@55 -- # run_test allowed allowed 00:29:07.809 08:58:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.809 08:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.809 08:58:09 -- common/autotest_common.sh@10 -- # set +x 00:29:07.809 ************************************ 00:29:07.809 START TEST allowed 00:29:07.809 ************************************ 00:29:07.809 08:58:09 -- common/autotest_common.sh@1111 -- # allowed 00:29:07.809 08:58:09 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:29:07.809 08:58:09 -- setup/acl.sh@45 -- # setup output config 00:29:07.809 08:58:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:07.809 08:58:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:07.809 08:58:09 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:29:09.183 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:09.183 08:58:10 -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:29:09.183 08:58:10 -- setup/acl.sh@28 -- # local dev driver 00:29:09.183 08:58:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:29:09.183 08:58:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:29:09.183 08:58:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:29:09.183 08:58:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:29:09.183 08:58:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:29:09.183 08:58:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:29:09.183 08:58:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:29:09.183 08:58:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:29:09.183 08:58:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:29:09.183 08:58:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:29:09.183 08:58:10 -- setup/acl.sh@48 -- # setup reset 00:29:09.183 08:58:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:29:09.183 08:58:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:10.167 ************************************ 00:29:10.167 END TEST allowed 00:29:10.167 00:29:10.167 real 0m2.589s 00:29:10.167 user 0m1.085s 00:29:10.167 sys 0m1.491s 00:29:10.167 08:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:10.167 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:29:10.167 ************************************ 00:29:10.167 ************************************ 00:29:10.167 END TEST acl 00:29:10.167 ************************************ 00:29:10.167 00:29:10.167 real 0m13.501s 00:29:10.167 user 0m3.437s 00:29:10.167 sys 0m5.118s 00:29:10.167 08:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:10.167 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:29:10.167 08:58:12 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:29:10.167 08:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.167 08:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.167 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:29:10.427 ************************************ 00:29:10.427 START TEST hugepages 00:29:10.427 ************************************ 00:29:10.427 08:58:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:29:10.427 * Looking for test storage... 
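The two ACL tests above (denied, allowed) follow the same verification scheme: drive setup.sh once with PCI_BLOCKED and once with PCI_ALLOWED, then resolve each controller's driver symlink under sysfs and assert it is still the kernel nvme driver. A sketch under those assumptions — the helper name check_still_on_nvme is hypothetical; the real logic lives in setup/acl.sh's verify():

check_still_on_nvme() {
  local bdf driver
  for bdf in "$@"; do
    [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
    # Resolve the bound driver, e.g. /sys/bus/pci/drivers/nvme.
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${driver##*/} == nvme ]] || return 1
  done
}

# denied: the blocked controller must keep its nvme driver.
PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
check_still_on_nvme 0000:00:10.0
# allowed: only 0000:00:10.0 may be rebound, so the others must keep nvme.
PCI_ALLOWED=0000:00:10.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
check_still_on_nvme 0000:00:11.0 0000:00:12.0 0000:00:13.0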
00:29:10.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:29:10.427 08:58:12 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:29:10.427 08:58:12 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:29:10.427 08:58:12 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:29:10.427 08:58:12 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:29:10.427 08:58:12 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:29:10.427 08:58:12 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:29:10.427 08:58:12 -- setup/common.sh@17 -- # local get=Hugepagesize 00:29:10.427 08:58:12 -- setup/common.sh@18 -- # local node= 00:29:10.427 08:58:12 -- setup/common.sh@19 -- # local var val 00:29:10.427 08:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:29:10.427 08:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:10.427 08:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:10.427 08:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:10.427 08:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:29:10.427 08:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 5861916 kB' 'MemAvailable: 7430140 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 441576 kB' 'Inactive: 1446692 kB' 'Active(anon): 110124 kB' 'Inactive(anon): 10700 kB' 'Active(file): 331452 kB' 'Inactive(file): 1435992 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 832 kB' 'Writeback: 0 kB' 'AnonPages: 110208 kB' 'Mapped: 49144 kB' 'Shmem: 10512 kB' 'KReclaimable: 75656 kB' 'Slab: 148592 kB' 'SReclaimable: 75656 kB' 'SUnreclaim: 72936 kB' 'KernelStack: 4692 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407568 kB' 'Committed_AS: 335888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- 
setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.427 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.427 08:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # continue 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:29:10.428 08:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:29:10.428 08:58:12 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:29:10.428 08:58:12 -- setup/common.sh@33 -- # echo 2048 00:29:10.428 08:58:12 -- setup/common.sh@33 -- # return 0 00:29:10.428 08:58:12 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:29:10.428 08:58:12 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:29:10.428 08:58:12 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:29:10.428 08:58:12 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:29:10.428 08:58:12 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:29:10.428 08:58:12 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
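The long per-key trace above is setup/common.sh's get_meminfo walking every /proc/meminfo line until it hits the requested key (here Hugepagesize). Condensed, and omitting the per-NUMA-node meminfo variant the real helper also supports, the loop is just:

get_meminfo() {
  local get=$1 var val _
  # Each meminfo line looks like "Hugepagesize:    2048 kB"; splitting on
  # ': ' leaves the key in var, the number in val, and the unit in _.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

get_meminfo Hugepagesize   # prints 2048 on this VM, hence default_hugepages=2048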
00:29:10.428 08:58:12 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:29:10.428 08:58:12 -- setup/hugepages.sh@207 -- # get_nodes 00:29:10.428 08:58:12 -- setup/hugepages.sh@27 -- # local node 00:29:10.428 08:58:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:29:10.428 08:58:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:29:10.428 08:58:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:29:10.428 08:58:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:29:10.688 08:58:12 -- setup/hugepages.sh@208 -- # clear_hp 00:29:10.688 08:58:12 -- setup/hugepages.sh@37 -- # local node hp 00:29:10.688 08:58:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:29:10.688 08:58:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:29:10.688 08:58:12 -- setup/hugepages.sh@41 -- # echo 0 00:29:10.688 08:58:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:29:10.688 08:58:12 -- setup/hugepages.sh@41 -- # echo 0 00:29:10.688 08:58:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:29:10.688 08:58:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:29:10.688 08:58:12 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:29:10.688 08:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.688 08:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.688 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:29:10.688 ************************************ 00:29:10.688 START TEST default_setup 00:29:10.688 ************************************ 00:29:10.688 08:58:12 -- common/autotest_common.sh@1111 -- # default_setup 00:29:10.688 08:58:12 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:29:10.688 08:58:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:29:10.688 08:58:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:29:10.688 08:58:12 -- setup/hugepages.sh@51 -- # shift 00:29:10.688 08:58:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:29:10.688 08:58:12 -- setup/hugepages.sh@52 -- # local node_ids 00:29:10.688 08:58:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:29:10.688 08:58:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:29:10.688 08:58:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:29:10.688 08:58:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:29:10.688 08:58:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:29:10.688 08:58:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:29:10.688 08:58:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:29:10.688 08:58:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:29:10.688 08:58:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:29:10.688 08:58:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:29:10.688 08:58:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:29:10.688 08:58:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:29:10.688 08:58:12 -- setup/hugepages.sh@73 -- # return 0 00:29:10.688 08:58:12 -- setup/hugepages.sh@137 -- # setup output 00:29:10.688 08:58:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:10.688 08:58:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:11.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:12.194 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.194 
0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.194 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.194 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.194 08:58:14 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:29:12.194 08:58:14 -- setup/hugepages.sh@89 -- # local node 00:29:12.195 08:58:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:29:12.195 08:58:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:29:12.195 08:58:14 -- setup/hugepages.sh@92 -- # local surp 00:29:12.195 08:58:14 -- setup/hugepages.sh@93 -- # local resv 00:29:12.195 08:58:14 -- setup/hugepages.sh@94 -- # local anon 00:29:12.195 08:58:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:29:12.195 08:58:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:29:12.195 08:58:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:29:12.195 08:58:14 -- setup/common.sh@18 -- # local node= 00:29:12.195 08:58:14 -- setup/common.sh@19 -- # local var val 00:29:12.195 08:58:14 -- setup/common.sh@20 -- # local mem_f mem 00:29:12.195 08:58:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:12.195 08:58:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:12.195 08:58:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:12.195 08:58:14 -- setup/common.sh@28 -- # mapfile -t mem 00:29:12.195 08:58:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:12.195 08:58:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972392 kB' 'MemAvailable: 9540536 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 454500 kB' 'Inactive: 1446684 kB' 'Active(anon): 123048 kB' 'Inactive(anon): 10660 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48844 kB' 'Shmem: 10484 kB' 'KReclaimable: 75432 kB' 'Slab: 148140 kB' 'SReclaimable: 75432 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4640 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 
08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 
-- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.195 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.195 08:58:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:12.196 08:58:14 -- setup/common.sh@33 -- # echo 0 00:29:12.196 
08:58:14 -- setup/common.sh@33 -- # return 0 00:29:12.196 08:58:14 -- setup/hugepages.sh@97 -- # anon=0 00:29:12.196 08:58:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:29:12.196 08:58:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:29:12.196 08:58:14 -- setup/common.sh@18 -- # local node= 00:29:12.196 08:58:14 -- setup/common.sh@19 -- # local var val 00:29:12.196 08:58:14 -- setup/common.sh@20 -- # local mem_f mem 00:29:12.196 08:58:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:12.196 08:58:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:12.196 08:58:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:12.196 08:58:14 -- setup/common.sh@28 -- # mapfile -t mem 00:29:12.196 08:58:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:12.196 08:58:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972392 kB' 'MemAvailable: 9540536 kB' 'Buffers: 2436 kB' 'Cached: 1775524 kB' 'SwapCached: 0 kB' 'Active: 453756 kB' 'Inactive: 1446688 kB' 'Active(anon): 122304 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 122528 kB' 'Mapped: 48996 kB' 'Shmem: 10480 kB' 'KReclaimable: 75428 kB' 'Slab: 148136 kB' 'SReclaimable: 75428 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4608 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.196 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.196 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _ 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': ' 00:29:12.197 08:58:14 -- setup/common.sh@31 -- # 
read -r var val _
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.197 08:58:14 -- setup/common.sh@33 -- # echo 0
00:29:12.197 08:58:14 -- setup/common.sh@33 -- # return 0
00:29:12.197 08:58:14 -- setup/hugepages.sh@99 -- # surp=0
00:29:12.197 08:58:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:12.197 08:58:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:29:12.197 08:58:14 -- setup/common.sh@18 -- # local node=
00:29:12.197 08:58:14 -- setup/common.sh@19 -- # local var val
00:29:12.197 08:58:14 -- setup/common.sh@20 -- # local mem_f mem
00:29:12.197 08:58:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:12.197 08:58:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:12.197 08:58:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:12.197 08:58:14 -- setup/common.sh@28 -- # mapfile -t mem
00:29:12.197 08:58:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.197 08:58:14 -- setup/common.sh@31 -- # read -r var val _
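The xtrace above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-node meminfo file) into an array with mapfile, then splits each "Key: value" line with IFS=': ' and read, skipping entries until the requested key matches and echoing its value. A minimal standalone sketch of the same lookup idea follows; the function name and the usage line are illustrative, not part of the SPDK scripts:

    # Sketch: look up one field from /proc/meminfo the way the helper does,
    # splitting each line on ': ' and comparing keys until the target matches.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric value; any trailing "kB" unit lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    # Illustrative usage: surp=$(get_meminfo_sketch HugePages_Surp)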
00:29:12.197 08:58:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972140 kB' 'MemAvailable: 9540284 kB' 'Buffers: 2436 kB' 'Cached: 1775524 kB' 'SwapCached: 0 kB' 'Active: 453516 kB' 'Inactive: 1446688 kB' 'Active(anon): 122064 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 48996 kB' 'Shmem: 10480 kB' 'KReclaimable: 75428 kB' 'Slab: 148136 kB' 'SReclaimable: 75428 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4624 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:12.197 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.460 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:12.460 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.460 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:12.460 08:58:14 -- setup/common.sh@33 -- # echo 0
00:29:12.460 08:58:14 -- setup/common.sh@33 -- # return 0
00:29:12.460 08:58:14 -- setup/hugepages.sh@100 -- # resv=0
00:29:12.460 08:58:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:29:12.460 nr_hugepages=1024
00:29:12.460 08:58:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:12.460 resv_hugepages=0
00:29:12.460 08:58:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:12.460 surplus_hugepages=0
00:29:12.460 08:58:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:12.460 anon_hugepages=0
00:29:12.460 08:58:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:12.460 08:58:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
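verify_nr_hugepages then cross-checks the kernel's bookkeeping: the live HugePages_Total reading must equal the configured page count plus any surplus and reserved pages (here 1024 == 1024 + 0 + 0). A hedged sketch of the same consistency check follows; the awk extraction is my shorthand, not the script's mechanism:

    # Sketch: assert hugepage accounting adds up, as the test does.
    nr_hugepages=1024   # the count the test configured
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent" \
        || echo "mismatch: total=$total surp=$surp resv=$resv"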
00:29:12.460 08:58:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:12.460 08:58:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:29:12.460 08:58:14 -- setup/common.sh@18 -- # local node=
00:29:12.460 08:58:14 -- setup/common.sh@19 -- # local var val
00:29:12.460 08:58:14 -- setup/common.sh@20 -- # local mem_f mem
00:29:12.460 08:58:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:12.460 08:58:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:12.460 08:58:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:12.460 08:58:14 -- setup/common.sh@28 -- # mapfile -t mem
00:29:12.460 08:58:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.460 08:58:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972140 kB' 'MemAvailable: 9540284 kB' 'Buffers: 2436 kB' 'Cached: 1775524 kB' 'SwapCached: 0 kB' 'Active: 453544 kB' 'Inactive: 1446688 kB' 'Active(anon): 122092 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 48996 kB' 'Shmem: 10480 kB' 'KReclaimable: 75428 kB' 'Slab: 148136 kB' 'SReclaimable: 75428 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4640 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:12.460 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.460 08:58:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:12.460 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.461 08:58:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:12.461 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.461 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:12.461 08:58:14 -- setup/common.sh@33 -- # echo 1024
00:29:12.461 08:58:14 -- setup/common.sh@33 -- # return 0
00:29:12.461 08:58:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:12.461 08:58:14 -- setup/hugepages.sh@112 -- # get_nodes
00:29:12.461 08:58:14 -- setup/hugepages.sh@27 -- # local node
00:29:12.461 08:58:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:12.461 08:58:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:29:12.461 08:58:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:12.461 08:58:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:29:12.461 08:58:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:12.461 08:58:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:29:12.461 08:58:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:12.461 08:58:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:12.461 08:58:14 -- setup/common.sh@18 -- # local node=0
00:29:12.461 08:58:14 -- setup/common.sh@19 -- # local var val
00:29:12.461 08:58:14 -- setup/common.sh@20 -- # local mem_f mem
00:29:12.461 08:58:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:12.461 08:58:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:12.461 08:58:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:12.461 08:58:14 -- setup/common.sh@28 -- # mapfile -t mem
00:29:12.461 08:58:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:12.461 08:58:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972140 kB' 'MemUsed: 4260096 kB' 'SwapCached: 0 kB' 'Active: 453548 kB' 'Inactive: 1446688 kB' 'Active(anon): 122096 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'FilePages: 1777960 kB' 'Mapped: 48996 kB' 'AnonPages: 122584 kB' 'Shmem: 10480 kB' 'KernelStack: 4640 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75428 kB' 'Slab: 148136 kB' 'SReclaimable: 75428 kB' 'SUnreclaim: 72708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
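For node-scoped counters, get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips that prefix so the same key/value loop works unchanged. A hedged standalone sketch of a per-node read follows; the loop is mine, while the test instead reuses its shared helper:

    # Sketch: read one hugepage counter for a given NUMA node.
    # Per-node meminfo lines look like "Node 0 HugePages_Free:  1024".
    node=0
    while read -r _ _ key val _; do
        [[ $key == "HugePages_Free:" ]] && echo "$val"
    done < "/sys/devices/system/node/node${node}/meminfo"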
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.461 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.461 08:58:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.461 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.462 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.462 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.462 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.462 08:58:14 -- setup/common.sh@32 -- # continue
00:29:12.462 08:58:14 -- setup/common.sh@31 -- # IFS=': '
00:29:12.462 08:58:14 -- setup/common.sh@31 -- # read -r var val _
00:29:12.462 08:58:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:12.462 08:58:14 -- setup/common.sh@33 -- # echo 0
00:29:12.462 08:58:14 -- setup/common.sh@33 -- # return 0
00:29:12.462 08:58:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:29:12.462 08:58:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:12.462 08:58:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:12.462 08:58:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:12.462 08:58:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:29:12.462 node0=1024 expecting 1024
00:29:12.462 08:58:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:29:12.462
00:29:12.462 real	0m1.764s
00:29:12.462 user	0m0.711s
00:29:12.462 sys	0m0.961s
00:29:12.462 08:58:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:12.462 08:58:14 -- common/autotest_common.sh@10 -- # set +x
00:29:12.462 ************************************
00:29:12.462 END TEST default_setup
00:29:12.462 ************************************
00:29:12.462 08:58:14 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:29:12.462 08:58:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:12.462 08:58:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:12.462 08:58:14 -- common/autotest_common.sh@10 -- # set +x
00:29:12.462 ************************************
00:29:12.462 START TEST per_node_1G_alloc
00:29:12.462 ************************************
00:29:12.462 08:58:14 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:29:12.462 08:58:14 -- setup/hugepages.sh@143 -- # local IFS=,
00:29:12.462 08:58:14 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:29:12.462 08:58:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:29:12.462 08:58:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:29:12.462 08:58:14 -- setup/hugepages.sh@51 -- # shift
00:29:12.462 08:58:14 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:29:12.462 08:58:14 -- setup/hugepages.sh@52 -- # local node_ids
00:29:12.462 08:58:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:29:12.462 08:58:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:29:12.462 08:58:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:29:12.462 08:58:14 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:29:12.462 08:58:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:29:12.462 08:58:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:29:12.462 08:58:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:29:12.462 08:58:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:12.462 08:58:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:12.462 08:58:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:29:12.462 08:58:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:29:12.462 08:58:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:29:12.462 08:58:14 -- setup/hugepages.sh@73 -- # return 0
00:29:12.462 08:58:14 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:29:12.462 08:58:14 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:29:12.462 08:58:14 -- setup/hugepages.sh@146 -- # setup output
00:29:12.462 08:58:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:12.462 08:58:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:13.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:13.292 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:13.292 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:13.292 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:13.292 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:13.292 08:58:15 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:29:13.292 08:58:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:29:13.292 08:58:15 -- setup/hugepages.sh@89 -- # local node
00:29:13.292 08:58:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:13.292 08:58:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:13.292 08:58:15 -- setup/hugepages.sh@92 -- # local surp
00:29:13.292 08:58:15 -- setup/hugepages.sh@93 -- # local resv
00:29:13.292 08:58:15 -- setup/hugepages.sh@94 -- # local anon
00:29:13.292 08:58:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:13.292 08:58:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:13.292 08:58:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:13.292 08:58:15 -- setup/common.sh@18 -- # local node=
00:29:13.292 08:58:15 -- setup/common.sh@19 -- # local var val
00:29:13.292 08:58:15 -- setup/common.sh@20 -- # local mem_f mem
00:29:13.292 08:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:13.292 08:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
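get_test_nr_hugepages converts the requested size into a page count: 1048576 kB (1 GiB) divided by the 2048 kB default hugepage size gives the 512 pages that NRHUGE=512 HUGENODE=0 then asks setup.sh to allocate on node 0. The same arithmetic as a hedged sketch, reading the live Hugepagesize rather than hard-coding it:

    # Sketch: derive the per-node page count the test computes.
    size_kb=1048576    # 1 GiB, expressed in kB as the test passes it
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
    echo $(( size_kb / hugepagesize_kb ))    # 512 when Hugepagesize is 2048 kB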
00:29:13.292 08:58:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:13.292 08:58:15 -- setup/common.sh@28 -- # mapfile -t mem
00:29:13.292 08:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:13.292 08:58:15 -- setup/common.sh@31 -- # IFS=': '
00:29:13.292 08:58:15 -- setup/common.sh@31 -- # read -r var val _
00:29:13.292 08:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9012424 kB' 'MemAvailable: 10580484 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 454396 kB' 'Inactive: 1446696 kB' 'Active(anon): 122944 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 49344 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147788 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72540 kB' 'KernelStack: 4756 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:13.292 08:58:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:13.292 08:58:15 -- setup/common.sh@32 -- # continue
00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': '
00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _
00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue
00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': '
00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _
00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:13.293 08:58:15 -- setup/common.sh@33 -- # echo 0
00:29:13.293 08:58:15 -- setup/common.sh@33 -- # return 0
00:29:13.293 08:58:15 -- setup/hugepages.sh@97 -- # anon=0
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.293 08:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:13.293 08:58:15 -- setup/common.sh@33 -- # echo 0 00:29:13.293 08:58:15 -- setup/common.sh@33 -- # return 0 00:29:13.293 08:58:15 -- setup/hugepages.sh@97 -- # anon=0 00:29:13.293 08:58:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:29:13.293 08:58:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:29:13.293 08:58:15 -- setup/common.sh@18 -- # local node= 00:29:13.293 08:58:15 -- setup/common.sh@19 -- # local var val 00:29:13.293 08:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:29:13.293 08:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:13.293 08:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:13.293 08:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:13.293 08:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:29:13.293 08:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.293 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.294 08:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9012424 kB' 'MemAvailable: 10580484 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453788 kB' 'Inactive: 1446696 kB' 
[trace condensed: the same field loop now tests every key against HugePages_Surp, continuing past each non-matching key from MemTotal through HugePages_Rsvd]
00:29:13.295 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:13.295 08:58:15 -- setup/common.sh@33 -- # echo 0 00:29:13.295 08:58:15 -- setup/common.sh@33 -- # return 0
00:29:13.295 08:58:15 -- setup/hugepages.sh@99 -- # surp=0 00:29:13.295 08:58:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:13.295 08:58:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:29:13.295 08:58:15 -- setup/common.sh@18 -- # local node= 00:29:13.295 08:58:15 -- setup/common.sh@19 -- # local var val 00:29:13.295 08:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:29:13.295 08:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:13.295 08:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:13.295 08:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:13.295 08:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:29:13.295 08:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:13.295 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.295 08:58:15 -- setup/common.sh@31 -- # read -r var val _
00:29:13.295 08:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9012424 kB' 'MemAvailable: 10580484 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453772 kB' 'Inactive: 1446696 kB' 'Active(anon): 122320 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122608 kB' 'Mapped: 49144 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147808 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72560 kB' 'KernelStack: 4584 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
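
The compare-and-continue pattern collapsed above is the meminfo scan inside the test harness's get_meminfo helper. Below is a minimal sketch of that helper, reconstructed purely from this xtrace output; the function body and variable names are inferred from the trace, not taken from the actual SPDK setup/common.sh:

    shopt -s extglob                        # needed for the +([0-9]) pattern

    get_meminfo() {                         # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem

        # Per-node stats live in sysfs; fall back to the global file otherwise.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                     # value only, unit discarded
            return 0
        done
        return 1
    }

Against the snapshots above, get_meminfo AnonHugePages prints 0 and get_meminfo HugePages_Total prints 512, matching the echo/return pairs in the trace.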
[trace condensed: field loop tests each key against HugePages_Rsvd, continuing past every non-matching key from MemTotal through HugePages_Free]
00:29:13.296 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:13.296 08:58:15 -- setup/common.sh@33 -- # echo 0 00:29:13.296 08:58:15 -- setup/common.sh@33 -- # return 0
00:29:13.296 08:58:15 -- setup/hugepages.sh@100 -- # resv=0
00:29:13.296 08:58:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:29:13.296 nr_hugepages=512
00:29:13.296 08:58:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:13.296 resv_hugepages=0
00:29:13.296 08:58:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:13.296 surplus_hugepages=0
00:29:13.296 08:58:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:13.296 anon_hugepages=0
00:29:13.296 08:58:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:29:13.296 08:58:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:29:13.296 08:58:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:13.296 08:58:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:29:13.296 08:58:15 -- setup/common.sh@18 -- # local node= 00:29:13.296 08:58:15 -- setup/common.sh@19 -- # local var val 00:29:13.296 08:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:29:13.296 08:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:13.296 08:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:13.296 08:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:13.296 08:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:29:13.296 08:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:13.296 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.296 08:58:15 -- setup/common.sh@31 -- # read -r var val _
00:29:13.296 08:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9012424 kB' 'MemAvailable: 10580484 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453832 kB' 'Inactive: 1446696 kB' 'Active(anon): 122380 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122688 kB' 'Mapped: 49144 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147812 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72564 kB' 'KernelStack: 4600 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
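
The four echoes and the arithmetic test at hugepages.sh@107 encode a simple accounting invariant: the kernel's HugePages_Total must equal the requested pool plus surplus plus reserved pages. A one-line check with this run's values, using the same variable names as the trace (anon is informational only and not part of the sum):

    nr_hugepages=512 surp=0 resv=0 anon=0    # values this run reported
    total=512                                 # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"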
00:29:13.296 08:58:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 
00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.297 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.297 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.557 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.557 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.558 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:13.558 08:58:15 -- setup/common.sh@33 -- # echo 512 00:29:13.558 08:58:15 -- setup/common.sh@33 -- # return 0 00:29:13.558 08:58:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:29:13.558 08:58:15 -- setup/hugepages.sh@112 -- # get_nodes 00:29:13.558 08:58:15 -- setup/hugepages.sh@27 -- # local node 00:29:13.558 08:58:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:29:13.558 08:58:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:29:13.558 08:58:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:29:13.558 08:58:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:29:13.558 08:58:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:29:13.558 08:58:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:29:13.558 08:58:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:29:13.558 08:58:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:29:13.558 08:58:15 -- setup/common.sh@18 -- # local node=0 00:29:13.558 08:58:15 -- 
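
get_nodes discovers the NUMA topology by globbing the sysfs node directories with an extglob pattern; on this single-node VM it finds only node0. A sketch of that walk as inferred from the trace above (the per-node target of 512 mirrors hugepages.sh@30; the separate nodes_test bookkeeping at @115-116 is omitted):

    shopt -s extglob nullglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path up to the last "node", leaving the index
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}                # 1 on this VM
    (( no_nodes > 0 )) && echo "NUMA nodes found: ${!nodes_sys[*]}"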
00:29:13.558 08:58:15 -- setup/common.sh@19 -- # local var val 00:29:13.558 08:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:29:13.558 08:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:13.558 08:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:29:13.558 08:58:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:29:13.558 08:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:29:13.558 08:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:13.558 08:58:15 -- setup/common.sh@31 -- # IFS=': '
00:29:13.558 08:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9012424 kB' 'MemUsed: 3219812 kB' 'SwapCached: 0 kB' 'Active: 453772 kB' 'Inactive: 1446696 kB' 'Active(anon): 122320 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1777956 kB' 'Mapped: 49144 kB' 'AnonPages: 122608 kB' 'Shmem: 10472 kB' 'KernelStack: 4584 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75248 kB' 'Slab: 147808 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:29:13.558 08:58:15 -- setup/common.sh@31 -- # read -r var val _
[trace condensed: field loop tests each node0 meminfo key against HugePages_Surp, continuing past the non-matching keys from MemTotal through FilePmdMapped]
00:29:13.559 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # continue 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:29:13.559 08:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:29:13.559 08:58:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:13.559 08:58:15 -- setup/common.sh@33 -- # echo 0 00:29:13.559 08:58:15 -- setup/common.sh@33 -- # return 0 00:29:13.559 08:58:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:29:13.559 08:58:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:29:13.559 08:58:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:29:13.559 08:58:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:29:13.559 08:58:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:29:13.559 node0=512 expecting 512 00:29:13.559 08:58:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:29:13.559 00:29:13.559 real 0m0.931s 00:29:13.559 user 0m0.376s 00:29:13.559 sys 0m0.518s 00:29:13.559 08:58:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:13.559 08:58:15 -- common/autotest_common.sh@10 -- # set +x 00:29:13.559 ************************************ 00:29:13.559 END TEST per_node_1G_alloc 00:29:13.559 ************************************ 00:29:13.559 08:58:15 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:29:13.559 08:58:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.559 08:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.559 08:58:15 -- common/autotest_common.sh@10 -- # set +x 00:29:13.559 ************************************ 00:29:13.559 START TEST even_2G_alloc 00:29:13.559 ************************************ 00:29:13.559 08:58:15 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:29:13.559 08:58:15 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:29:13.559 08:58:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:29:13.559 08:58:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:29:13.559 08:58:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:29:13.559 08:58:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:29:13.559 08:58:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:29:13.559 08:58:15 -- setup/hugepages.sh@62 -- # user_nodes=() 00:29:13.559 08:58:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:29:13.559 08:58:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:29:13.559 08:58:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:29:13.559 08:58:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:29:13.559 08:58:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:29:13.559 08:58:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
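The scan that just completed is the same pattern every get_meminfo call in this log follows: read /proc/meminfo (or the per-node /sys/devices/system/node/node<N>/meminfo when a node argument is given), strip the "Node <N> " prefix, then walk the "Field: value" pairs until the requested key matches. A minimal sketch of that pattern, reconstructed from the traced commands above (the authoritative helper is setup/common.sh in the SPDK tree):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo pattern traced above; names follow the trace,
# error handling is simplified.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Prefer the per-node counters when a node is requested and exposed.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node lines carry a "Node <N> " prefix; strip it so both sources
	# parse identically.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Field: value [kB]" records until the requested field matches.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp 0  # prints 0 for node0, as in the run above

Feeding the loop from printf through process substitution, rather than a pipe, keeps the while body in the current shell, so return 0 can exit the function as soon as the field is found.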
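The get_test_nr_hugepages trace above turns the requested size into a page count: with the 2048 kB page size the snapshots report as Hugepagesize, size=2097152 (kB, which is what makes the numbers work out) yields nr_hugepages=1024, and get_test_nr_hugepages_per_node then spreads that pool over the available NUMA nodes; with a single node, node0 takes all 1024 pages, as the nodes_test assignment that follows shows. A rough sketch of the arithmetic; the even-split loop is reconstructed from the trace rather than copied from hugepages.sh:

# Numbers from this run: a 2 GiB request, 2048 kB hugepages, one NUMA node.
size=2097152
default_hugepages=2048                        # Hugepagesize: 2048 kB
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024

_no_nodes=1                                   # NUMA nodes present
_nr_hugepages=$nr_hugepages
declare -a nodes_test
# Fill from the last node down, splitting what remains evenly.
while (( _no_nodes > 0 )); do
	nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
	(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
	(( _no_nodes-- ))
done

echo "node0=${nodes_test[0]}"                 # node0=1024: all pages on node0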
00:29:13.559 08:58:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:29:13.559 08:58:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:13.559 08:58:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:29:13.559 08:58:15 -- setup/hugepages.sh@83 -- # : 0
00:29:13.559 08:58:15 -- setup/hugepages.sh@84 -- # : 0
00:29:13.559 08:58:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:13.559 08:58:15 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:29:13.559 08:58:15 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:29:13.559 08:58:15 -- setup/hugepages.sh@153 -- # setup output
00:29:13.559 08:58:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:13.559 08:58:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:14.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:14.127 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:14.127 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:14.127 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:14.127 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:14.127 08:58:16 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:29:14.127 08:58:16 -- setup/hugepages.sh@89 -- # local node
00:29:14.127 08:58:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:14.127 08:58:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:14.127 08:58:16 -- setup/hugepages.sh@92 -- # local surp
00:29:14.127 08:58:16 -- setup/hugepages.sh@93 -- # local resv
00:29:14.127 08:58:16 -- setup/hugepages.sh@94 -- # local anon
00:29:14.127 08:58:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:14.127 08:58:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:14.127 08:58:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:14.127 08:58:16 -- setup/common.sh@18 -- # local node=
00:29:14.127 08:58:16 -- setup/common.sh@19 -- # local var val
00:29:14.127 08:58:16 -- setup/common.sh@20 -- # local mem_f mem
00:29:14.127 08:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:14.127 08:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:14.127 08:58:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:14.127 08:58:16 -- setup/common.sh@28 -- # mapfile -t mem
00:29:14.127 08:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:14.127 08:58:16 -- setup/common.sh@31 -- # IFS=': '
00:29:14.127 08:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7968940 kB' 'MemAvailable: 9537000 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 454256 kB' 'Inactive: 1446700 kB' 'Active(anon): 122804 kB' 'Inactive(anon): 10668 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 49020 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147852 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4772 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:14.127 08:58:16 -- setup/common.sh@31 -- # read -r var val _
00:29:14.127 08:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:14.127 08:58:16 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32: the read / compare / continue cycle repeats for every remaining field in the snapshot above ...]
00:29:14.390 08:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:14.390 08:58:16 -- setup/common.sh@33 -- # echo 0
00:29:14.390 08:58:16 -- setup/common.sh@33 -- # return 0
00:29:14.390 08:58:16 -- setup/hugepages.sh@97 -- # anon=0
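With anon known, verify_nr_hugepages has its first counter. The gate traced at hugepages.sh@96 is a transparent-hugepage check: the string "always [madvise] never" comes from the THP mode file, the bracketed entry is the active mode, and only "[never]" would let the test skip sampling AnonHugePages. A small sketch of that gate, assuming the standard sysfs path and the get_meminfo helper sketched earlier:

# Sketch of the anon-THP gate seen at hugepages.sh@96; the path is assumed
# to be the standard sysfs knob.
thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
# e.g. "always [madvise] never"; the brackets mark the active mode.
anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
	# THP may be handing out anonymous huge pages; account for them.
	anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
fi
echo "anon=$anon"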
00:29:14.390 08:58:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:29:14.390 08:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:14.390 08:58:16 -- setup/common.sh@18 -- # local node=
00:29:14.390 08:58:16 -- setup/common.sh@19 -- # local var val
00:29:14.390 08:58:16 -- setup/common.sh@20 -- # local mem_f mem
00:29:14.390 08:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:14.390 08:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:14.390 08:58:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:14.390 08:58:16 -- setup/common.sh@28 -- # mapfile -t mem
00:29:14.390 08:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:14.390 08:58:16 -- setup/common.sh@31 -- # IFS=': '
00:29:14.390 08:58:16 -- setup/common.sh@31 -- # read -r var val _
00:29:14.390 08:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7968700 kB' 'MemAvailable: 9536760 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453948 kB' 'Inactive: 1446700 kB' 'Active(anon): 122496 kB' 'Inactive(anon): 10668 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 122592 kB' 'Mapped: 49132 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147916 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 4628 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:14.390 08:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:14.390 08:58:16 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32: the read / compare / continue cycle repeats for every remaining field in the snapshot above ...]
00:29:14.391 08:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:14.391 08:58:16 -- setup/common.sh@33 -- # echo 0
00:29:14.391 08:58:16 -- setup/common.sh@33 -- # return 0
00:29:14.391 08:58:16 -- setup/hugepages.sh@99 -- # surp=0
00:29:14.391 08:58:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:14.391 08:58:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:29:14.391 08:58:16 -- setup/common.sh@18 -- # local node=
00:29:14.391 08:58:16 -- setup/common.sh@19 -- # local var val
00:29:14.391 08:58:16 -- setup/common.sh@20 -- # local mem_f mem
00:29:14.391 08:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:14.391 08:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:14.391 08:58:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:14.391 08:58:16 -- setup/common.sh@28 -- # mapfile -t mem
00:29:14.391 08:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:14.391 08:58:16 -- setup/common.sh@31 -- # IFS=': '
00:29:14.391 08:58:16 -- setup/common.sh@31 -- # read -r var val _
00:29:14.391 08:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7968700 kB' 'MemAvailable: 9536760 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453972 kB' 'Inactive: 1446700 kB' 'Active(anon): 122520 kB' 'Inactive(anon): 10668 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 122612 kB' 'Mapped: 49132 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147916 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 4628 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:14.391 08:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:14.391 08:58:16 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32: the read / compare / continue cycle repeats for every remaining field in the snapshot above ...]
00:29:14.392 08:58:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:14.393 08:58:16 -- setup/common.sh@33 -- # echo 0
00:29:14.393 08:58:16 -- setup/common.sh@33 -- # return 0
00:29:14.393 08:58:16 -- setup/hugepages.sh@100 -- # resv=0
00:29:14.393 08:58:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:29:14.393 nr_hugepages=1024
00:29:14.393 08:58:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:14.393 resv_hugepages=0
00:29:14.393 08:58:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:14.393 surplus_hugepages=0
00:29:14.393 08:58:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:14.393 anon_hugepages=0
00:29:14.393 08:58:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:14.393 08:58:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
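All three sampled counters are zero in this run, so the bookkeeping that follows is straightforward: the script echoes the counters, requires that the requested 1024 pages account for surplus and reserved pages, and then re-reads HugePages_Total from /proc/meminfo as the kernel's own word on the pool size. A condensed sketch of those checks; "target" stands in for whatever expression the script expanded to the literal 1024:

# Condensed sketch of the consistency checks at hugepages.sh@102-@110; uses
# the get_meminfo helper sketched earlier, values match this run.
target=1024
nr_hugepages=1024     # requested pool size for even_2G_alloc
surp=0                # HugePages_Surp, sampled above
resv=0                # HugePages_Rsvd, sampled above
anon=0                # AnonHugePages, sampled above

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool is consistent when the target covers surplus + reserved pages and
# matches the base count exactly; HugePages_Total is then re-read to confirm.
(( target == nr_hugepages + surp + resv )) || exit 1
(( target == nr_hugepages )) || exit 1
total=$(get_meminfo HugePages_Total)   # returns 1024, per the snapshot below
echo "HugePages_Total=$total"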
00:29:14.393 08:58:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:14.393 08:58:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:29:14.393 08:58:16 -- setup/common.sh@18 -- # local node=
00:29:14.393 08:58:16 -- setup/common.sh@19 -- # local var val
00:29:14.393 08:58:16 -- setup/common.sh@20 -- # local mem_f mem
00:29:14.393 08:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:14.393 08:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:14.393 08:58:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:14.393 08:58:16 -- setup/common.sh@28 -- # mapfile -t mem
00:29:14.393 08:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:14.393 08:58:16 -- setup/common.sh@31 -- # IFS=': '
00:29:14.393 08:58:16 -- setup/common.sh@31 -- # read -r var val _
00:29:14.393 08:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7968700 kB' 'MemAvailable: 9536760 kB' 'Buffers: 2436 kB' 'Cached: 1775520 kB' 'SwapCached: 0 kB' 'Active: 453940 kB' 'Inactive: 1446700 kB' 'Active(anon): 122488 kB' 'Inactive(anon): 10668 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 49132 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147916 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 4628 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:14.393 08:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:14.393 08:58:16 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read / compare / continue xtrace cycle repeats for every remaining field (MemFree through Unaccepted); none match ...]
00:29:14.394 08:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:14.394 08:58:16 -- setup/common.sh@33 -- # echo 1024
00:29:14.394 08:58:16 -- setup/common.sh@33 -- # return 0
00:29:14.394 08:58:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:14.394 08:58:16 -- setup/hugepages.sh@112 -- # get_nodes
00:29:14.394 08:58:16 -- setup/hugepages.sh@27 -- # local node
00:29:14.394 08:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:14.394 08:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:29:14.394 08:58:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:14.394 08:58:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:29:14.394 08:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:14.394 08:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:29:14.394 08:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:14.394 08:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:14.394 08:58:16 -- setup/common.sh@18 -- # local node=0
00:29:14.394 08:58:16 -- setup/common.sh@19 -- # local var val
00:29:14.394 08:58:16 -- setup/common.sh@20 -- # local mem_f mem
00:29:14.394 08:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:14.394 08:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:14.394 08:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:14.394 08:58:16 -- setup/common.sh@28 -- # mapfile -t mem
00:29:14.394 08:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:14.394 08:58:16 -- setup/common.sh@31 -- # IFS=': '
00:29:14.394 08:58:16 -- setup/common.sh@31 -- # read -r var val _
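For the per-node query the same parser is pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion above strips before splitting. A sketch of that prefix handling (extglob is required for the +([0-9]) pattern; the helper name is illustrative, not from the script):

  shopt -s extglob

  node_meminfo() {
      local node=$1 get=$2 line var val _
      while read -r line; do
          # "Node 0 HugePages_Surp:  0" -> "HugePages_Surp:  0"
          line=${line#Node +([0-9]) }
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  node_meminfo 0 HugePages_Surp   # prints 0 for the run below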
00:29:14.394 08:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7968700 kB' 'MemUsed: 4263536 kB' 'SwapCached: 0 kB' 'Active: 453952 kB' 'Inactive: 1446700 kB' 'Active(anon): 122500 kB' 'Inactive(anon): 10668 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436032 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 1777956 kB' 'Mapped: 49132 kB' 'AnonPages: 122592 kB' 'Shmem: 10472 kB' 'KernelStack: 4628 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75248 kB' 'Slab: 147912 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:29:14.394 08:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:14.394 08:58:16 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read / compare / continue xtrace cycle repeats for every remaining node0 field (MemFree through HugePages_Free); none match ...]
00:29:14.395 08:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:14.395 08:58:16 -- setup/common.sh@33 -- # echo 0
00:29:14.395 08:58:16 -- setup/common.sh@33 -- # return 0
00:29:14.395 08:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
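The cross-node comparison that follows relies on a small bash idiom: each node's observed count is written into sorted_t as an associative-array key, so duplicate counts collapse and a single remaining key means every node matched the expectation. A sketch with this run's values (one node, 1024 pages):

  declare -A sorted_t=()
  nodes_test=(1024)   # per-node page counts gathered above

  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1   # the count itself becomes a set key
      echo "node${node}=${nodes_test[node]} expecting 1024"
  done

  # One distinct key in sorted_t means all nodes agree on the count.
  (( ${#sorted_t[@]} == 1 )) && echo "all nodes hold the expected pages"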
00:29:14.395 08:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:14.395 08:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:14.395 08:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:14.395 08:58:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:29:14.395 node0=1024 expecting 1024
00:29:14.395 08:58:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:29:14.395
00:29:14.395 real    0m0.893s
00:29:14.395 user    0m0.339s
00:29:14.395 sys     0m0.511s
00:29:14.395 08:58:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:14.395 08:58:16 -- common/autotest_common.sh@10 -- # set +x
00:29:14.395 ************************************
00:29:14.395 END TEST even_2G_alloc
00:29:14.395 ************************************
00:29:14.654 08:58:16 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:29:14.654 08:58:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:14.654 08:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:14.654 08:58:16 -- common/autotest_common.sh@10 -- # set +x
00:29:14.654 ************************************
00:29:14.654 START TEST odd_alloc
00:29:14.654 ************************************
00:29:14.654 08:58:16 -- common/autotest_common.sh@1111 -- # odd_alloc
00:29:14.654 08:58:16 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:29:14.654 08:58:16 -- setup/hugepages.sh@49 -- # local size=2098176
00:29:14.654 08:58:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:29:14.654 08:58:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:29:14.654 08:58:16 -- setup/hugepages.sh@62 -- # user_nodes=()
00:29:14.654 08:58:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:29:14.654 08:58:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:29:14.654 08:58:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:29:14.654 08:58:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:14.654 08:58:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:14.654 08:58:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:29:14.654 08:58:16 -- setup/hugepages.sh@83 -- # : 0
00:29:14.654 08:58:16 -- setup/hugepages.sh@84 -- # : 0
00:29:14.654 08:58:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:14.654 08:58:16 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:29:14.654 08:58:16 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:29:14.654 08:58:16 -- setup/hugepages.sh@160 -- # setup output
00:29:14.654 08:58:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:14.654 08:58:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:15.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:15.222 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:15.222 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:15.222 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:15.222 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:15.222 08:58:17 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
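odd_alloc deliberately requests 2098176 kB, which is not an even multiple of the 2048 kB Hugepagesize: 2098176 / 2048 = 1024.5, and get_test_nr_hugepages settles on 1025 pages (hence HUGEMEM=2049, since 2049 MB = 2098176 kB, and the Hugetlb total of 2099200 kB below is exactly 1025 x 2048 kB). One way to reproduce that rounding, as a sketch (the exact upstream arithmetic may differ; a ceiling division is assumed here):

  size_kb=2098176
  hugepagesize_kb=2048   # Hugepagesize from /proc/meminfo
  # Ceiling division: round the request up to whole hugepages.
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "$nr_hugepages"   # 1025, matching nr_hugepages=1025 in the trace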
00:29:15.222 08:58:17 -- setup/hugepages.sh@89 -- # local node
00:29:15.222 08:58:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:15.222 08:58:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:15.222 08:58:17 -- setup/hugepages.sh@92 -- # local surp
00:29:15.222 08:58:17 -- setup/hugepages.sh@93 -- # local resv
00:29:15.222 08:58:17 -- setup/hugepages.sh@94 -- # local anon
00:29:15.222 08:58:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:15.222 08:58:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:15.222 08:58:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:15.222 08:58:17 -- setup/common.sh@18 -- # local node=
00:29:15.222 08:58:17 -- setup/common.sh@19 -- # local var val
00:29:15.222 08:58:17 -- setup/common.sh@20 -- # local mem_f mem
00:29:15.222 08:58:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:15.222 08:58:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:15.222 08:58:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:15.222 08:58:17 -- setup/common.sh@28 -- # mapfile -t mem
00:29:15.222 08:58:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:15.222 08:58:17 -- setup/common.sh@31 -- # IFS=': '
00:29:15.223 08:58:17 -- setup/common.sh@31 -- # read -r var val _
00:29:15.223 08:58:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7970116 kB' 'MemAvailable: 9538184 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 454116 kB' 'Inactive: 1446704 kB' 'Active(anon): 122664 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 49160 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 148060 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72812 kB' 'KernelStack: 4592 kB' 'PageTables: 3352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455120 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:15.223 08:58:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:15.223 08:58:17 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read / compare / continue xtrace cycle repeats for every remaining field (MemFree through HardwareCorrupted); none match ...]
00:29:15.223 08:58:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:15.223 08:58:17 -- setup/common.sh@33 -- # echo 0
00:29:15.223 08:58:17 -- setup/common.sh@33 -- # return 0
00:29:15.223 08:58:17 -- setup/hugepages.sh@97 -- # anon=0
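The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at the top of verify_nr_hugepages is the transparent-hugepage gate: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only worth sampling when that mode is not pinned to never. A sketch of the gate (reusing the get_meminfo sketch above; the variable names are illustrative):

  thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  # The file reads e.g. "always [madvise] never"; the bracketed word is active.
  if [[ $thp_mode != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # THP-backed anonymous memory, in kB
  else
      anon=0
  fi
  echo "anon=${anon}"   # 0 in this run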
00:29:15.223 08:58:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:29:15.223 08:58:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:15.224 08:58:17 -- setup/common.sh@18 -- # local node=
00:29:15.224 08:58:17 -- setup/common.sh@19 -- # local var val
00:29:15.224 08:58:17 -- setup/common.sh@20 -- # local mem_f mem
00:29:15.224 08:58:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:15.224 08:58:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:15.224 08:58:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:15.224 08:58:17 -- setup/common.sh@28 -- # mapfile -t mem
00:29:15.224 08:58:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:15.224 08:58:17 -- setup/common.sh@31 -- # IFS=': '
00:29:15.224 08:58:17 -- setup/common.sh@31 -- # read -r var val _
00:29:15.224 08:58:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7969864 kB' 'MemAvailable: 9537932 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 453704 kB' 'Inactive: 1446696 kB' 'Active(anon): 122252 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 122500 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147960 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72712 kB' 'KernelStack: 4608 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455120 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:15.224 08:58:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:15.224 08:58:17 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read / compare / continue xtrace cycle repeats for every remaining field (MemFree through HugePages_Rsvd); none match ...]
00:29:15.224 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:15.224 08:58:17 -- setup/common.sh@33 -- # echo 0
00:29:15.224 08:58:17 -- setup/common.sh@33 -- # return 0
00:29:15.224 08:58:17 -- setup/hugepages.sh@99 -- # surp=0
00:29:15.224 08:58:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:15.224 08:58:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:29:15.224 08:58:17 -- setup/common.sh@18 -- # local node=
00:29:15.224 08:58:17 -- setup/common.sh@19 -- # local var val
00:29:15.224 08:58:17 -- setup/common.sh@20 -- # local mem_f mem
00:29:15.224 08:58:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:15.224 08:58:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:15.224 08:58:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:15.224 08:58:17 -- setup/common.sh@28 -- # mapfile -t mem
00:29:15.224 08:58:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:15.224 08:58:17 -- setup/common.sh@31 -- # IFS=': '
00:29:15.224 08:58:17 -- setup/common.sh@31 -- # read -r var val _
00:29:15.224 08:58:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7969864 kB' 'MemAvailable: 9537932 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 453752 kB' 'Inactive: 1446696 kB' 'Active(anon): 122300 kB' 'Inactive(anon): 10656 kB'
'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147956 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4592 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455120 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.484 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.484 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 
08:58:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.485 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.485 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:15.486 08:58:17 -- setup/common.sh@33 -- # echo 0 00:29:15.486 08:58:17 -- setup/common.sh@33 -- # return 0 00:29:15.486 08:58:17 -- setup/hugepages.sh@100 -- # resv=0 00:29:15.486 08:58:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:29:15.486 nr_hugepages=1025 00:29:15.486 08:58:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:29:15.486 resv_hugepages=0 00:29:15.486 surplus_hugepages=0 00:29:15.486 anon_hugepages=0 00:29:15.486 08:58:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:29:15.486 08:58:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:29:15.486 08:58:17 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:29:15.486 08:58:17 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:29:15.486 08:58:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:29:15.486 08:58:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:29:15.486 08:58:17 -- setup/common.sh@18 -- # local node= 00:29:15.486 08:58:17 -- setup/common.sh@19 -- # local var val 00:29:15.486 08:58:17 -- setup/common.sh@20 -- # local mem_f mem 00:29:15.486 08:58:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:15.486 08:58:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:15.486 08:58:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:15.486 08:58:17 -- setup/common.sh@28 -- # mapfile -t mem 00:29:15.486 08:58:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7969864 kB' 'MemAvailable: 9537932 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 453588 kB' 'Inactive: 1446696 kB' 'Active(anon): 122136 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 75248 kB' 'Slab: 147956 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 4592 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455120 kB' 'Committed_AS: 350588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.486 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.486 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # continue 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': ' 00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _ 00:29:15.487 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:29:15.487 08:58:17 -- setup/common.sh@33 -- # echo 1025 00:29:15.487 08:58:17 -- setup/common.sh@33 -- # return 0 00:29:15.487 
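The block above is one full round trip through get_meminfo in setup/common.sh: the helper snapshots /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node argument is given, stripping each line's 'Node N' prefix), walks the snapshot with IFS=': ' read -r var val _, and echoes the value as soon as the requested key matches. Every non-matching key produces the '[[ ... ]]' test plus 'continue' pair that dominates the raw trace. Below is a minimal standalone sketch of the same pattern, assuming bash 4+ for mapfile; it is an illustration, not the actual SPDK helper.

#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below

# Echo the value of one meminfo key, system-wide or for one NUMA node.
get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _
	# Per-node statistics live in sysfs; fall back to /proc/meminfo otherwise.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Per-node lines are prefixed with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		# Non-matching keys are the 'continue' iterations seen in the trace.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total    # prints 1025 on the VM traced above
get_meminfo HugePages_Surp 0   # same key, but read from node0's meminfo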
00:29:15.487 08:58:17 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:29:15.487 08:58:17 -- setup/hugepages.sh@112 -- # get_nodes
00:29:15.487 08:58:17 -- setup/hugepages.sh@27 -- # local node
00:29:15.487 08:58:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:15.487 08:58:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:29:15.487 08:58:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:15.487 08:58:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:29:15.487 08:58:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:15.487 08:58:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:29:15.487 08:58:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:15.487 08:58:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:15.487 08:58:17 -- setup/common.sh@18 -- # local node=0
00:29:15.487 08:58:17 -- setup/common.sh@19 -- # local var val
00:29:15.487 08:58:17 -- setup/common.sh@20 -- # local mem_f mem
00:29:15.487 08:58:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:15.487 08:58:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:15.487 08:58:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:15.487 08:58:17 -- setup/common.sh@28 -- # mapfile -t mem
00:29:15.487 08:58:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:15.487 08:58:17 -- setup/common.sh@31 -- # IFS=': '
00:29:15.487 08:58:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7969864 kB' 'MemUsed: 4262372 kB' 'SwapCached: 0 kB' 'Active: 453668 kB' 'Inactive: 1446696 kB' 'Active(anon): 122216 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'FilePages: 1777964 kB' 'Mapped: 48972 kB' 'AnonPages: 122500 kB' 'Shmem: 10472 kB' 'KernelStack: 4608 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75248 kB' 'Slab: 147956 kB' 'SReclaimable: 75248 kB' 'SUnreclaim: 72708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:29:15.487 08:58:17 -- setup/common.sh@31 -- # read -r var val _
[... per-key '[[ <key> == HugePages_Surp ]] / continue' scan of node0's snapshot elided ...]
00:29:15.488 08:58:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:15.488 08:58:17 -- setup/common.sh@33 -- # echo 0
00:29:15.488 08:58:17 -- setup/common.sh@33 -- # return 0
00:29:15.488 08:58:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:29:15.488 08:58:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:15.488 08:58:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:15.488 08:58:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:15.488 08:58:17 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:29:15.488 node0=1025 expecting 1025
00:29:15.488 08:58:17 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:29:15.488
00:29:15.488 real	0m0.905s
00:29:15.488 user	0m0.366s
00:29:15.488 sys	0m0.485s
00:29:15.488 08:58:17 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:15.488 08:58:17 -- common/autotest_common.sh@10 -- # set +x
00:29:15.488 ************************************
00:29:15.488 END TEST odd_alloc
00:29:15.488 ************************************
00:29:15.488 08:58:17 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:29:15.488 08:58:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:15.488 08:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:15.488 08:58:17 -- common/autotest_common.sh@10 -- # set +x
00:29:15.746 ************************************
00:29:15.746 START TEST custom_alloc
00:29:15.746 ************************************
00:29:15.746 08:58:17 -- common/autotest_common.sh@1111 -- # custom_alloc
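The odd_alloc verification that just completed reduces to two assertions: globally, HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages (the (( 1025 == nr_hugepages + surp + resv )) check), and each NUMA node's share must match its expectation (the 'node0=1025 expecting 1025' line). The same bookkeeping as a hedged standalone sketch, reusing the hypothetical get_meminfo helper sketched earlier; the real per-node accounting in setup/hugepages.sh is more involved than shown here.

#!/usr/bin/env bash
shopt -s extglob
# Assumes the get_meminfo sketch above has been sourced.

expected=1025                          # what the test asked the kernel for
surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace above
total=$(get_meminfo HugePages_Total)   # 1025 in the trace above

# Global check: the kernel's pool equals request + surplus + reserved.
(( total == expected + surp + resv )) || { echo "hugepage pool mismatch"; exit 1; }

# Per-node check: each node's share must meet its expectation; this VM has
# a single node, so node0 carries the whole pool.
for node_dir in /sys/devices/system/node/node+([0-9]); do
	node=${node_dir##*node}
	echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $expected"
done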
00:29:15.746 08:58:17 -- setup/hugepages.sh@167 -- # local IFS=,
00:29:15.746 08:58:17 -- setup/hugepages.sh@169 -- # local node
00:29:15.746 08:58:17 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:29:15.746 08:58:17 -- setup/hugepages.sh@170 -- # local nodes_hp
00:29:15.746 08:58:17 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:29:15.746 08:58:17 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:29:15.746 08:58:17 -- setup/hugepages.sh@49 -- # local size=1048576
00:29:15.746 08:58:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:29:15.746 08:58:17 -- setup/hugepages.sh@62 -- # user_nodes=()
00:29:15.746 08:58:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:29:15.746 08:58:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:29:15.746 08:58:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:15.746 08:58:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:15.746 08:58:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@83 -- # : 0
00:29:15.746 08:58:17 -- setup/hugepages.sh@84 -- # : 0
00:29:15.746 08:58:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:29:15.746 08:58:17 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:29:15.746 08:58:17 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:29:15.746 08:58:17 -- setup/hugepages.sh@62 -- # user_nodes=()
00:29:15.746 08:58:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:29:15.746 08:58:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:29:15.746 08:58:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:15.746 08:58:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:15.746 08:58:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:29:15.746 08:58:17 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:29:15.746 08:58:17 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:29:15.746 08:58:17 -- setup/hugepages.sh@78 -- # return 0
00:29:15.746 08:58:17 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:29:15.746 08:58:17 -- setup/hugepages.sh@187 -- # setup output
00:29:15.746 08:58:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:15.746 08:58:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:16.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:16.314 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:16.314 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:16.314 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:16.314 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:16.314 08:58:18 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:29:16.314 08:58:18 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:29:16.314 08:58:18 -- setup/hugepages.sh@89 -- # local node
00:29:16.314 08:58:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:16.314 08:58:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:16.314 08:58:18 -- setup/hugepages.sh@92 -- # local surp
00:29:16.314 08:58:18 -- setup/hugepages.sh@93 -- # local resv
00:29:16.314 08:58:18 -- setup/hugepages.sh@94 -- # local anon
00:29:16.314 08:58:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:16.314 08:58:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:16.314 08:58:18 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:16.314 08:58:18 -- setup/common.sh@18 -- # local node=
00:29:16.314 08:58:18 -- setup/common.sh@19 -- # local var val
00:29:16.314 08:58:18 -- setup/common.sh@20 -- # local mem_f mem
00:29:16.314 08:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:16.314 08:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:16.314 08:58:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:16.314 08:58:18 -- setup/common.sh@28 -- # mapfile -t mem
00:29:16.314 08:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:16.314 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.314 08:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9027232 kB' 'MemAvailable: 10595296 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 451232 kB' 'Inactive: 1446704 kB' 'Active(anon): 119780 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 972 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 48228 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147900 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72660 kB' 'KernelStack: 4524 kB' 'PageTables: 2900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 340092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:16.314 08:58:18 -- setup/common.sh@31 -- # read -r var val _
[... per-key '[[ <key> == AnonHugePages ]] / continue' scan of the snapshot above elided ...]
00:29:16.315 08:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:16.315 08:58:18 --
setup/common.sh@33 -- # echo 0 00:29:16.315 08:58:18 -- setup/common.sh@33 -- # return 0 00:29:16.315 08:58:18 -- setup/hugepages.sh@97 -- # anon=0 00:29:16.315 08:58:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:29:16.315 08:58:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:29:16.315 08:58:18 -- setup/common.sh@18 -- # local node= 00:29:16.315 08:58:18 -- setup/common.sh@19 -- # local var val 00:29:16.315 08:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:29:16.315 08:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:16.315 08:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:16.315 08:58:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:16.315 08:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:29:16.315 08:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:16.315 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.315 08:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9026728 kB' 'MemAvailable: 10594792 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 450992 kB' 'Inactive: 1446696 kB' 'Active(anon): 119540 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 972 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 48048 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147888 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 4512 kB' 'PageTables: 3064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 340092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:16.315 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.315 08:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.315 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.315 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.315 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.315 08:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.315 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.315 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.316 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.316 08:58:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.316 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.316 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.316 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.316 08:58:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.316 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.316 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.316 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.316 08:58:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.316 08:58:18 -- 
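For reference, the lookup the xtrace above keeps repeating boils down to the following minimal stand-alone sketch. It is an illustration of the behaviour visible in the trace, not the verbatim test/setup/common.sh source, and the function name is ours:

#!/usr/bin/env bash
# Sketch: fetch one key from /proc/meminfo (or a per-NUMA-node meminfo file).
shopt -s extglob
get_meminfo_sketch() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem
	# With a node argument, read that node's own meminfo file instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # drop any "Node N " prefix
	while IFS=': ' read -r var val _; do
		# First field matches the requested key -> print its value, done.
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}
get_meminfo_sketch HugePages_Surp    # prints 0 on this host
get_meminfo_sketch HugePages_Total 0 # node0 value; prints 512 here

The scan is linear over the snapshot, which is why the trace shows one [[ ... ]]/continue pair per meminfo key until the requested one matches.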
00:29:16.317 08:58:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:16.317 08:58:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:29:16.317 08:58:18 -- setup/common.sh@18 -- # local node=
00:29:16.317 08:58:18 -- setup/common.sh@19 -- # local var val
00:29:16.317 08:58:18 -- setup/common.sh@20 -- # local mem_f mem
00:29:16.317 08:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:16.317 08:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:16.317 08:58:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:16.317 08:58:18 -- setup/common.sh@28 -- # mapfile -t mem
00:29:16.317 08:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:16.317 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.317 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.317 08:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9026728 kB' 'MemAvailable: 10594792 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 450992 kB' 'Inactive: 1446696 kB' 'Active(anon): 119540 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 972 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 48048 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147888 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 4580 kB' 'PageTables: 3064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 340092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
[... xtrace condensed: per-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; every key from MemTotal through HugePages_Free continues ...]
00:29:16.580 08:58:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:16.580 08:58:18 -- setup/common.sh@33 -- # echo 0
00:29:16.580 08:58:18 -- setup/common.sh@33 -- # return 0
00:29:16.580 08:58:18 -- setup/hugepages.sh@100 -- # resv=0
00:29:16.580 08:58:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:29:16.580 nr_hugepages=512
00:29:16.580 08:58:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:16.580 resv_hugepages=0
00:29:16.580 08:58:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:16.580 surplus_hugepages=0
00:29:16.580 08:58:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:16.580 anon_hugepages=0
00:29:16.580 08:58:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:29:16.580 08:58:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
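The anon/surp/resv lookups above feed the accounting check at hugepages.sh@107/@110. A rough stand-alone restatement of that identity (variable names are illustrative; 512 is this run's requested pool size):

#!/usr/bin/env bash
# The kernel-reported total should equal the requested pool plus any
# surplus and reserved pages; all extras are 0 in this run, so 512 == 512.
requested=512
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( total == requested + surp + resv )) && echo "hugepage pool consistent"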
00:29:16.580 08:58:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:16.580 08:58:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:29:16.580 08:58:18 -- setup/common.sh@18 -- # local node=
00:29:16.580 08:58:18 -- setup/common.sh@19 -- # local var val
00:29:16.580 08:58:18 -- setup/common.sh@20 -- # local mem_f mem
00:29:16.580 08:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:16.580 08:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:16.580 08:58:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:16.580 08:58:18 -- setup/common.sh@28 -- # mapfile -t mem
00:29:16.580 08:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:16.580 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.580 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.580 08:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9026728 kB' 'MemAvailable: 10594792 kB' 'Buffers: 2436 kB' 'Cached: 1775528 kB' 'SwapCached: 0 kB' 'Active: 450948 kB' 'Inactive: 1446696 kB' 'Active(anon): 119496 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 972 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 48048 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147888 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 4564 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980432 kB' 'Committed_AS: 340092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
[... xtrace condensed: per-key scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; every key from MemTotal through Unaccepted continues ...]
00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:16.581 08:58:18 -- setup/common.sh@33 -- # echo 512
00:29:16.581 08:58:18 -- setup/common.sh@33 -- # return 0
00:29:16.581 08:58:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:29:16.581 08:58:18 -- setup/hugepages.sh@112 -- # get_nodes
00:29:16.581 08:58:18 -- setup/hugepages.sh@27 -- # local node
00:29:16.581 08:58:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:16.581 08:58:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:29:16.581 08:58:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:16.581 08:58:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
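get_nodes above discovers the NUMA topology by globbing sysfs. A hedged sketch of that enumeration (the array name and the awk extraction are illustrative, not the script's own code):

#!/usr/bin/env bash
# Enumerate NUMA nodes the way the trace shows; extglob provides +([0-9]).
shopt -s extglob nullglob
declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# Key by node number; store that node's huge page total as the value.
	nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
echo "nodes: ${!nodes_sys[*]}" # this single-socket VM prints: nodes: 0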
00:29:16.581 08:58:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:16.581 08:58:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:29:16.581 08:58:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:16.581 08:58:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:16.581 08:58:18 -- setup/common.sh@18 -- # local node=0
00:29:16.581 08:58:18 -- setup/common.sh@19 -- # local var val
00:29:16.581 08:58:18 -- setup/common.sh@20 -- # local mem_f mem
00:29:16.581 08:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:16.581 08:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:16.581 08:58:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:16.581 08:58:18 -- setup/common.sh@28 -- # mapfile -t mem
00:29:16.581 08:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.581 08:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 9026728 kB' 'MemUsed: 3205508 kB' 'SwapCached: 0 kB' 'Active: 450944 kB' 'Inactive: 1446696 kB' 'Active(anon): 119492 kB' 'Inactive(anon): 10656 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 972 kB' 'Writeback: 0 kB' 'FilePages: 1777964 kB' 'Mapped: 48048 kB' 'AnonPages: 119976 kB' 'Shmem: 10472 kB' 'KernelStack: 4564 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75240 kB' 'Slab: 147888 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _
setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.581 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.581 08:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # 
continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # continue
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # IFS=': '
00:29:16.582 08:58:18 -- setup/common.sh@31 -- # read -r var val _
00:29:16.582 08:58:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:16.582 08:58:18 -- setup/common.sh@33 -- # echo 0
00:29:16.582 08:58:18 -- setup/common.sh@33 -- # return 0
00:29:16.582 08:58:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:29:16.582 08:58:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:16.582 08:58:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:16.582 08:58:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:16.582 08:58:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:29:16.582 node0=512 expecting 512
00:29:16.582 08:58:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:29:16.582
00:29:16.582 real 0m0.883s
00:29:16.582 user 0m0.358s
00:29:16.582 sys 0m0.483s
00:29:16.582 08:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:16.582 08:58:18 -- common/autotest_common.sh@10 -- # set +x
00:29:16.582 ************************************
00:29:16.582 END TEST custom_alloc
00:29:16.582 ************************************
00:29:16.582 08:58:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:29:16.582 08:58:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:16.582 08:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:16.582 08:58:18 -- common/autotest_common.sh@10 -- # set +x
00:29:16.582 ************************************
00:29:16.582 START TEST no_shrink_alloc
00:29:16.582 ************************************
00:29:16.582 08:58:18 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:29:16.582 08:58:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:29:16.840 08:58:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:29:16.840 08:58:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:29:16.840 08:58:18 -- setup/hugepages.sh@51 -- # shift
00:29:16.840 08:58:18 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:29:16.840 08:58:18 -- setup/hugepages.sh@52 -- # local node_ids
00:29:16.840 08:58:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:29:16.840 08:58:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:29:16.840 08:58:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:29:16.840 08:58:18 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:29:16.840 08:58:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:29:16.840 08:58:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:29:16.840 08:58:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:29:16.840 08:58:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:16.840 08:58:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:16.840 08:58:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:29:16.840 08:58:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:29:16.840 08:58:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:29:16.840 08:58:18 -- setup/hugepages.sh@73 -- # return 0
00:29:16.840 08:58:18 -- setup/hugepages.sh@198 -- # setup output
00:29:16.841 08:58:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:16.841 08:58:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:17.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:17.358 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:17.358 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:17.358 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:17.358 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:17.358 08:58:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:29:17.358 08:58:19 -- setup/hugepages.sh@89 -- # local node
00:29:17.358 08:58:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:17.358 08:58:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:17.358 08:58:19 -- setup/hugepages.sh@92 -- # local surp
00:29:17.358 08:58:19 -- setup/hugepages.sh@93 -- # local resv
00:29:17.358 08:58:19 -- setup/hugepages.sh@94 -- # local anon
00:29:17.358 08:58:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:17.358 08:58:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:17.358 08:58:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:17.358 08:58:19 -- setup/common.sh@18 -- # local node=
00:29:17.358 08:58:19 -- setup/common.sh@19 -- # local var val
00:29:17.359 08:58:19 -- setup/common.sh@20 -- # local mem_f mem
00:29:17.359 08:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:17.359 08:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:17.359 08:58:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:17.359 08:58:19 -- setup/common.sh@28 -- #
mapfile -t mem 00:29:17.359 08:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972280 kB' 'MemAvailable: 9540348 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1446708 kB' 'Active(anon): 120844 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48096 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147820 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72580 kB' 'KernelStack: 4688 kB' 'PageTables: 3320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 
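The long runs of [[ SomeField == \H\u\g\e\P\a\g\e\s\_... ]] / continue entries in this stretch are bash xtrace from the test's get_meminfo helper, which scans a meminfo file key by key until it reaches the requested counter and echoes its value. Below is a minimal sketch of that helper, reconstructed from the traced lines (setup/common.sh@17-33); it is not the verbatim SPDK source and may differ from the real helper in detail.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo helper whose xtrace dominates this log
# (setup/common.sh@17-33). Reconstructed from the trace, not verbatim SPDK.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo mem

	# With a node argument, prefer that node's own meminfo file
	# (trace: common.sh@23-24 flips mem_f to .../node0/meminfo).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip it
	# (trace: mem=("${mem[@]#Node +([0-9]) }")).
	mem=("${mem[@]#Node +([0-9]) }")

	# The [[ ... ]]/continue storm in the log is this scan: walk the
	# "Key: value ..." pairs until the requested key matches, then
	# emit its value.
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total      # system-wide pool, e.g. 1024 in this run
get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0

The per-node branch explains the two shapes seen in this log: queries such as get_meminfo HugePages_Surp 0 read /sys/devices/system/node/node0/meminfo, while node-less queries like get_meminfo AnonHugePages fall through to /proc/meminfo.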
00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 
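A reading aid for the trace syntax: right-hand sides rendered as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption. When the pattern operand of [[ == ]] is quoted in the script, bash's xtrace prints it with every character backslash-escaped, marking it as a literal match rather than a glob; the "setup/common.sh@NN -- #" prefix on each entry appears to be a customized PS4 standing in for the default "+". A hypothetical two-test snippet (not from the test suite) showing the difference as it would appear under set -x:

# Demo of the escaping seen in this trace: a quoted (literal) right-hand
# side of [[ == ]] is printed character-escaped, an unquoted glob is not.
set -x
get=HugePages_Surp
[[ MemFree == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ MemFree == Mem* ]]     # traced as: [[ MemFree == Mem* ]]
set +x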
00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.359 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.359 08:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:29:17.360 08:58:19 -- setup/common.sh@33 -- # echo 0 00:29:17.360 08:58:19 -- setup/common.sh@33 -- # return 0 00:29:17.360 08:58:19 -- setup/hugepages.sh@97 -- # anon=0 00:29:17.360 08:58:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:29:17.360 08:58:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:29:17.360 08:58:19 -- setup/common.sh@18 -- # local node= 00:29:17.360 08:58:19 -- setup/common.sh@19 -- # local var val 00:29:17.360 08:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:29:17.360 08:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:17.360 08:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:17.360 08:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:17.360 08:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:29:17.360 08:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972280 kB' 'MemAvailable: 9540348 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 450972 kB' 'Inactive: 1446696 kB' 'Active(anon): 119520 kB' 'Inactive(anon): 10652 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 47936 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147800 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72560 kB' 'KernelStack: 4536 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- 
setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.360 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.360 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:17.361 08:58:19 -- setup/common.sh@33 -- # echo 0 00:29:17.361 08:58:19 -- setup/common.sh@33 -- # return 0 00:29:17.361 08:58:19 -- setup/hugepages.sh@99 -- # surp=0 00:29:17.361 08:58:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:29:17.361 08:58:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:29:17.361 08:58:19 -- setup/common.sh@18 -- # local node= 00:29:17.361 08:58:19 -- setup/common.sh@19 -- # local var val 00:29:17.361 08:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:29:17.361 08:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:17.361 08:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:17.361 08:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:17.361 08:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:29:17.361 08:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972028 kB' 'MemAvailable: 9540096 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 450808 kB' 'Inactive: 1446696 kB' 'Active(anon): 119356 kB' 'Inactive(anon): 10652 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147816 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 4560 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB' 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.361 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.361 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 
-- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 
08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.362 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.362 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.622 08:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.622 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.622 08:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.622 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.622 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.622 08:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # continue 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:29:17.623 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:29:17.623 08:58:19 -- setup/common.sh@33 -- # echo 0 00:29:17.623 08:58:19 -- setup/common.sh@33 -- # return 0 
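The entries that follow close out verify_nr_hugepages for the 1024-page pool: HugePages_Rsvd came back 0 just above, the script echoes its counters, and then re-reads HugePages_Total to confirm the pool arithmetic. Below is a compact, self-contained sketch of that accounting check as it shows up in the traced setup/hugepages.sh@100-110; the meminfo helper and variable handling are simplified stand-ins, only the counter names and the identity being checked come from the trace.

# Sketch of the accounting check traced below (setup/hugepages.sh@100-110):
# the pool the kernel reports must equal the requested size once surplus
# and reserved pages are accounted for. meminfo() is an illustrative
# stand-in for the test's get_meminfo helper.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                  # target set earlier by get_test_nr_hugepages
surp=$(meminfo HugePages_Surp)     # surplus pages over the static pool
resv=$(meminfo HugePages_Rsvd)     # reserved but not yet faulted in
total=$(meminfo HugePages_Total)   # what the kernel actually provisioned

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The test passes only if this identity holds, as in the (( ... )) checks
# traced below with surp=0 and resv=0.
(( total == nr_hugepages + surp + resv )) || exit 1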
00:29:17.623 08:58:19 -- setup/hugepages.sh@100 -- # resv=0
00:29:17.623 08:58:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:29:17.623 nr_hugepages=1024
00:29:17.623 08:58:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:17.623 resv_hugepages=0
00:29:17.623 08:58:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:17.623 surplus_hugepages=0
00:29:17.623 08:58:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:17.623 anon_hugepages=0
00:29:17.623 08:58:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:17.623 08:58:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:29:17.623 08:58:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:17.623 08:58:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:29:17.623 08:58:19 -- setup/common.sh@18 -- # local node=
00:29:17.623 08:58:19 -- setup/common.sh@19 -- # local var val
00:29:17.623 08:58:19 -- setup/common.sh@20 -- # local mem_f mem
00:29:17.623 08:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:17.623 08:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:17.623 08:58:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:17.623 08:58:19 -- setup/common.sh@28 -- # mapfile -t mem
00:29:17.623 08:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:17.623 08:58:19 -- setup/common.sh@31 -- # IFS=': '
00:29:17.623 08:58:19 -- setup/common.sh@31 -- # read -r var val _
00:29:17.623 08:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972028 kB' 'MemAvailable: 9540096 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 450788 kB' 'Inactive: 1446696 kB' 'Active(anon): 119336 kB' 'Inactive(anon): 10652 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 119840 kB' 'Mapped: 47928 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147816 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 4560 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
[xtrace elided: the read loop compares every /proc/meminfo field (MemTotal through HugePages_Free) against HugePages_Total and continues past each non-match]
00:29:17.624 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:17.624 08:58:19 -- setup/common.sh@33 -- # echo 1024
00:29:17.624 08:58:19 -- setup/common.sh@33 -- # return 0
00:29:17.624 08:58:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:17.624 08:58:19 -- setup/hugepages.sh@112 -- # get_nodes
00:29:17.624 08:58:19 -- setup/hugepages.sh@27 -- # local node
00:29:17.624 08:58:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:17.624 08:58:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:29:17.624 08:58:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:17.624 08:58:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:29:17.624 08:58:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:17.624 08:58:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:29:17.624 08:58:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:17.624 08:58:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:17.624 08:58:19 -- setup/common.sh@18 -- # local node=0
00:29:17.624 08:58:19 -- setup/common.sh@19 -- # local var val
00:29:17.624 08:58:19 -- setup/common.sh@20 -- # local mem_f mem
00:29:17.624 08:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
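The assertions at hugepages.sh@107-110 boil down to one identity: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, or the setup step misbehaved. A hedged sketch of that check, reusing the hypothetical get_meminfo from the sketch above:

    # Sketch of the verification traced at hugepages.sh@107-110 (approximation,
    # not the verbatim SPDK source; get_meminfo is the helper sketched earlier).
    check_hugepage_accounting() {
        local nr_hugepages=1024
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1024 in this run
        # Every allocated page must be accounted for as requested, surplus,
        # or reserved; anything else means the allocation drifted.
        (( total == nr_hugepages + surp + resv )) || return 1
        echo "nr_hugepages=$nr_hugepages"
    }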
00:29:17.624 08:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:17.624 08:58:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:17.624 08:58:19 -- setup/common.sh@28 -- # mapfile -t mem
00:29:17.624 08:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:17.625 08:58:19 -- setup/common.sh@31 -- # IFS=': '
00:29:17.625 08:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7972028 kB' 'MemUsed: 4260208 kB' 'SwapCached: 0 kB' 'Active: 450804 kB' 'Inactive: 1446696 kB' 'Active(anon): 119352 kB' 'Inactive(anon): 10652 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'FilePages: 1777968 kB' 'Mapped: 47928 kB' 'AnonPages: 119848 kB' 'Shmem: 10472 kB' 'KernelStack: 4560 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75240 kB' 'Slab: 147816 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:29:17.625 08:58:19 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the read loop compares each node0 meminfo field (MemTotal through FilePmdMapped) against HugePages_Surp and continues past each non-match]
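The get_nodes walk traced at hugepages.sh@27-33 enumerates NUMA nodes with an extglob pathname pattern. A rough stand-alone equivalent (the per-node nr_hugepages sysfs path is an assumption used for illustration; the real script fills nodes_sys from its own state):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Sketch of a get_nodes-style NUMA enumeration, names approximated.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last "node", leaving the index.
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "found $no_nodes node(s): ${!nodes_sys[*]}"

On the single-node VM traced here this yields no_nodes=1 with node 0 holding the full 1024 pages.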
[xtrace elided: the node0 scan continues through Unaccepted, HugePages_Total, and HugePages_Free before matching]
00:29:17.625 08:58:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:17.625 08:58:19 -- setup/common.sh@33 -- # echo 0
00:29:17.625 08:58:19 -- setup/common.sh@33 -- # return 0
00:29:17.625 08:58:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:29:17.625 08:58:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:17.625 08:58:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:17.625 08:58:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:17.626 08:58:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:29:17.626 node0=1024 expecting 1024
00:29:17.626 08:58:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:29:17.626 08:58:19 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:29:17.626 08:58:19 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:29:17.626 08:58:19 -- setup/hugepages.sh@202 -- # setup output
00:29:17.626 08:58:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:17.626 08:58:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:18.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:18.193 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:18.193 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:18.193 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:18.193 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:29:18.193 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:29:18.193 08:58:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:29:18.193 08:58:20 -- setup/hugepages.sh@89 -- # local node
00:29:18.193 08:58:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:29:18.193 08:58:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:29:18.193 08:58:20 -- setup/hugepages.sh@92 -- # local surp
00:29:18.193 08:58:20 -- setup/hugepages.sh@93 -- # local resv
00:29:18.193 08:58:20 -- setup/hugepages.sh@94 -- # local anon
00:29:18.193 08:58:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:18.193 08:58:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:18.193 08:58:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:18.193 08:58:20 -- setup/common.sh@18 -- # local node=
00:29:18.193 08:58:20 -- setup/common.sh@19 -- # local var val
00:29:18.193 08:58:20 -- setup/common.sh@20 -- # local mem_f mem
00:29:18.193 08:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:18.193 08:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:18.193 08:58:20 -- setup/common.sh@25 -- # [[ -n '' ]]
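The gate at hugepages.sh@96 inspects /sys/kernel/mm/transparent_hugepage/enabled, where the kernel marks the active mode with brackets (here "always [madvise] never"). Only when THP is not pinned to [never] does the verifier bother measuring AnonHugePages, since THP can hand out anonymous huge pages behind the test's back. A sketch of that logic (get_meminfo is the hypothetical helper from the earlier sketch):

    # Sketch of the THP gate traced at hugepages.sh@96 (approximation).
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP is active in some form, so count anonymous huge pages
        # instead of assuming zero.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"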
00:29:18.193 08:58:20 -- setup/common.sh@28 -- # mapfile -t mem
00:29:18.193 08:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:18.193 08:58:20 -- setup/common.sh@31 -- # IFS=': '
00:29:18.193 08:58:20 -- setup/common.sh@31 -- # read -r var val _
00:29:18.193 08:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7967496 kB' 'MemAvailable: 9535564 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 451996 kB' 'Inactive: 1446708 kB' 'Active(anon): 120544 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1124 kB' 'Writeback: 0 kB' 'AnonPages: 120580 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147816 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 4776 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
[xtrace elided: the read loop compares every /proc/meminfo field (MemTotal through HardwareCorrupted) against AnonHugePages and continues past each non-match]
00:29:18.194 08:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:18.194 08:58:20 -- setup/common.sh@33 -- # echo 0
00:29:18.194 08:58:20 -- setup/common.sh@33 -- # return 0
00:29:18.194 08:58:20 -- setup/hugepages.sh@97 -- # anon=0
00:29:18.455 08:58:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:29:18.455 08:58:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:18.455 08:58:20 -- setup/common.sh@18 -- # local node=
00:29:18.455 08:58:20 -- setup/common.sh@19 -- # local var val
00:29:18.455 08:58:20 -- setup/common.sh@20 -- # local mem_f mem
00:29:18.455 08:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:18.455 08:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:18.456 08:58:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:18.456 08:58:20 -- setup/common.sh@28 -- # mapfile -t mem
00:29:18.456 08:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': '
00:29:18.456 08:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7967244 kB' 'MemAvailable: 9535312 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 451548 kB' 'Inactive: 1446708 kB' 'Active(anon): 120096 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1124 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 48168 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147816 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 4572 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 341940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the read loop compares /proc/meminfo fields (MemTotal onward) against HugePages_Surp and continues past each non-match]
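The array rewrite at setup/common.sh@29, mem=("${mem[@]#Node +([0-9]) }"), is an extglob prefix strip: lines in a per-node sysfs meminfo all begin with "Node <n> ", and this parameter expansion drops that prefix from every element in one pass so the same parser handles both /proc/meminfo and the sysfs variant. A tiny self-contained demo (input inlined for illustration; the real input comes from /sys/devices/system/node/node0/meminfo):

    #!/usr/bin/env bash
    shopt -s extglob
    # Demo of the prefix strip performed at setup/common.sh@29.
    mem=('Node 0 MemTotal:  12232236 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node <n> " from every element
    printf '%s\n' "${mem[@]}"
    # -> MemTotal:  12232236 kB
    # -> HugePages_Surp: 0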
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:29:18.456 08:58:20 -- setup/common.sh@32 -- # continue 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:29:18.456 08:58:20 -- setup/common.sh@31 -- # 
read -r var val _
00:29:18.456 08:58:20 (get_meminfo scan for HugePages_Surp continues key by key: Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each fail the match and hit continue)
00:29:18.457 08:58:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:18.457 08:58:20 -- setup/common.sh@33 -- # echo 0
00:29:18.457 08:58:20 -- setup/common.sh@33 -- # return 0
00:29:18.457 08:58:20 -- setup/hugepages.sh@99 -- # surp=0
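The trace just above is one complete call of get_meminfo: it scanned every /proc/meminfo key until HugePages_Surp matched, echoed the value and returned. The helper's shape can be reconstructed from the xtrace; a minimal sketch in bash, with the loop paraphrased rather than copied verbatim from setup/common.sh:

# Sketch of get_meminfo as implied by the trace: look one key up in
# /proc/meminfo, or in a node's sysfs meminfo when a node id is given.
# extglob is needed for the "Node N " prefix strip seen at common.sh@29.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem line
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

Called as get_meminfo HugePages_Rsvd just below it answers 0; called as get_meminfo HugePages_Surp 0 further down it switches to the node0 sysfs file.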
00:29:18.457 08:58:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:29:18.457 08:58:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:29:18.457 08:58:20 -- setup/common.sh@18 -- # local node=
00:29:18.457 08:58:20 -- setup/common.sh@19 -- # local var val
00:29:18.457 08:58:20 -- setup/common.sh@20 -- # local mem_f mem
00:29:18.457 08:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:18.457 08:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:18.457 08:58:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:18.457 08:58:20 -- setup/common.sh@28 -- # mapfile -t mem
00:29:18.457 08:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:18.457 08:58:20 -- setup/common.sh@31 -- # IFS=': '
00:29:18.457 08:58:20 -- setup/common.sh@31 -- # read -r var val _
00:29:18.457 08:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7967244 kB' 'MemAvailable: 9535312 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 450592 kB' 'Inactive: 1446708 kB' 'Active(anon): 119140 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1124 kB' 'Writeback: 0 kB' 'AnonPages: 119700 kB' 'Mapped: 48188 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147812 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72572 kB' 'KernelStack: 4524 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:18.457 08:58:20 (HugePages_Rsvd scan: every key from MemTotal through HugePages_Free fails the match and hits continue)
00:29:18.458 08:58:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:29:18.458 08:58:20 -- setup/common.sh@33 -- # echo 0
00:29:18.458 08:58:20 -- setup/common.sh@33 -- # return 0
00:29:18.458 08:58:20 -- setup/hugepages.sh@100 -- # resv=0
00:29:18.458 08:58:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:29:18.458 nr_hugepages=1024
00:29:18.458 08:58:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:29:18.458 resv_hugepages=0
00:29:18.458 08:58:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:29:18.458 surplus_hugepages=0
00:29:18.458 08:58:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:29:18.458 anon_hugepages=0
00:29:18.458 08:58:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:18.458 08:58:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
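The four echoes and the two arithmetic tests just above are the actual assertion of this test step. Spelled out as standalone bash, assuming get_meminfo behaves as in the sketch earlier (variable names here are illustrative):

# The kernel's HugePages_Total must equal the pool size the test requested
# plus the surplus and reserved pages accounted for separately above.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo 'hugepages accounting mismatch' >&2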
00:29:18.458 08:58:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:29:18.458 08:58:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:29:18.458 08:58:20 (get_meminfo preamble repeats as above: node=, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ' read loop)
00:29:18.459 08:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7967244 kB' 'MemAvailable: 9535312 kB' 'Buffers: 2436 kB' 'Cached: 1775532 kB' 'SwapCached: 0 kB' 'Active: 450788 kB' 'Inactive: 1446708 kB' 'Active(anon): 119336 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1124 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 48128 kB' 'Shmem: 10472 kB' 'KReclaimable: 75240 kB' 'Slab: 147812 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72572 kB' 'KernelStack: 4508 kB' 'PageTables: 3000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456144 kB' 'Committed_AS: 339604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53152 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 7157760 kB' 'DirectMap1G: 7340032 kB'
00:29:18.459 08:58:20 (HugePages_Total scan: every key from MemTotal through Unaccepted fails the match and hits continue)
00:29:18.460 08:58:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:29:18.460 08:58:20 -- setup/common.sh@33 -- # echo 1024
00:29:18.460 08:58:20 -- setup/common.sh@33 -- # return 0
00:29:18.460 08:58:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:29:18.460 08:58:20 -- setup/hugepages.sh@112 -- # get_nodes
00:29:18.460 08:58:20 -- setup/hugepages.sh@27 -- # local node
00:29:18.460 08:58:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:29:18.460 08:58:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:29:18.460 08:58:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:29:18.460 08:58:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:29:18.460 08:58:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:29:18.460 08:58:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
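get_nodes, traced just above, enumerates NUMA nodes through sysfs. The trace only shows the already-expanded assignment nodes_sys[0]=1024, so where that value is read from is an assumption in this sketch (the hugepages-2048kB pool path is inferred from the 'Hugepagesize: 2048 kB' line in the snapshots):

# Sketch of get_nodes: one nodes_sys entry per NUMA node found in sysfs.
shopt -s extglob
declare -a nodes_sys
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces the sysfs path to the numeric node id
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # fail when no node is present
}

On this single-node VM it finds node0 only, hence no_nodes=1.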
00:29:18.460 08:58:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:29:18.460 08:58:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:18.460 08:58:20 -- setup/common.sh@18 -- # local node=0
00:29:18.460 08:58:20 -- setup/common.sh@19 -- # local var val
00:29:18.460 08:58:20 -- setup/common.sh@20 -- # local mem_f mem
00:29:18.460 08:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:18.460 08:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:29:18.460 08:58:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:29:18.460 08:58:20 -- setup/common.sh@28 -- # mapfile -t mem
00:29:18.460 08:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:18.460 08:58:20 -- setup/common.sh@31 -- # IFS=': '
00:29:18.460 08:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232236 kB' 'MemFree: 7967244 kB' 'MemUsed: 4264992 kB' 'SwapCached: 0 kB' 'Active: 450508 kB' 'Inactive: 1446708 kB' 'Active(anon): 119056 kB' 'Inactive(anon): 10664 kB' 'Active(file): 331452 kB' 'Inactive(file): 1436044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1124 kB' 'Writeback: 0 kB' 'FilePages: 1777968 kB' 'Mapped: 48128 kB' 'AnonPages: 119572 kB' 'Shmem: 10472 kB' 'KernelStack: 4492 kB' 'PageTables: 2956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75240 kB' 'Slab: 147812 kB' 'SReclaimable: 75240 kB' 'SUnreclaim: 72572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:29:18.460 08:58:20 -- setup/common.sh@31 -- # read -r var val _
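The mem=("${mem[@]#Node +([0-9]) }") expansion exists for exactly this per-node path: sysfs meminfo lines carry a "Node 0 " prefix that /proc/meminfo lines lack. A one-line demonstration (the sample line is illustrative):

shopt -s extglob                          # +([0-9]) is an extglob pattern
line='Node 0 HugePages_Total:     1024'   # sysfs-style per-node line
echo "${line#Node +([0-9]) }"             # prints: HugePages_Total:     1024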
00:29:18.460 08:58:20 (node0 HugePages_Surp scan: every key from MemTotal through FilePmdMapped fails the match and hits continue)
00:29:18.461 08:58:20 (scan concludes: Unaccepted, HugePages_Total and HugePages_Free also fail the match)
00:29:18.461 08:58:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:18.461 08:58:20 -- setup/common.sh@33 -- # echo 0
00:29:18.461 08:58:20 -- setup/common.sh@33 -- # return 0
00:29:18.461 08:58:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:29:18.461 08:58:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:29:18.461 08:58:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:29:18.461 08:58:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:29:18.461 08:58:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:29:18.461 node0=1024 expecting 1024
00:29:18.461 08:58:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:29:18.461 00:29:18.461
00:29:18.461 real 0m1.830s
00:29:18.461 user 0m0.748s
00:29:18.461 sys 0m1.005s
00:29:18.461 08:58:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:18.461 08:58:20 -- common/autotest_common.sh@10 -- # set +x
00:29:18.461 ************************************
00:29:18.461 END TEST no_shrink_alloc
00:29:18.461 ************************************
00:29:18.721 08:58:20 -- setup/hugepages.sh@217 -- # clear_hp
00:29:18.721 08:58:20 -- setup/hugepages.sh@37 -- # local node hp
00:29:18.721 08:58:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:29:18.721 08:58:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:29:18.721 08:58:20 -- setup/hugepages.sh@41 -- # echo 0
00:29:18.721 08:58:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:29:18.721 08:58:20 -- setup/hugepages.sh@41 -- # echo 0
00:29:18.721 08:58:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:29:18.721 08:58:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
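clear_hp, traced just above, tears the hugepage pools back down after the test. The trace shows only "echo 0" at hugepages.sh@41, so the redirection target in this sketch is inferred from the hp loop variable rather than read off the log:

# Sketch of clear_hp: zero every hugepage pool on every node the test
# touched, then flag the teardown for later stages.
clear_hp() {
    local node hp
    for node in "${!nodes_sys[@]}"; do
        for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target; the trace shows only the echo
        done
    done
    export CLEAR_HUGE=yes
}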
00:29:18.721 ************************************
00:29:18.721 END TEST hugepages
00:29:18.721 ************************************
00:29:18.721 00:29:18.721
00:29:18.721 real 0m8.247s
00:29:18.721 user 0m3.215s
00:29:18.721 sys 0m4.523s
00:29:18.721 08:58:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:18.721 08:58:20 -- common/autotest_common.sh@10 -- # set +x
00:29:18.721 08:58:20 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:29:18.721 08:58:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:18.721 08:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:18.721 08:58:20 -- common/autotest_common.sh@10 -- # set +x
00:29:18.721 ************************************
00:29:18.721 START TEST driver
00:29:18.721 ************************************
00:29:18.721 08:58:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:29:18.721 * Looking for test storage...
00:29:18.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:29:18.980 08:58:20 -- setup/driver.sh@68 -- # setup reset
00:29:18.980 08:58:20 -- setup/common.sh@9 -- # [[ reset == output ]]
00:29:18.980 08:58:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:25.616 08:58:27 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:29:25.616 08:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:25.616 08:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:25.616 08:58:27 -- common/autotest_common.sh@10 -- # set +x
00:29:25.616 ************************************
00:29:25.616 START TEST guess_driver
00:29:25.616 ************************************
00:29:25.616 08:58:27 -- common/autotest_common.sh@1111 -- # guess_driver
00:29:25.616 08:58:27 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:29:25.616 08:58:27 -- setup/driver.sh@47 -- # local fail=0
00:29:25.616 08:58:27 -- setup/driver.sh@49 -- # pick_driver
00:29:25.616 08:58:27 -- setup/driver.sh@36 -- # vfio
00:29:25.616 08:58:27 -- setup/driver.sh@21 -- # local iommu_grups
00:29:25.616 08:58:27 -- setup/driver.sh@22 -- # local unsafe_vfio
00:29:25.616 08:58:27 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:29:25.616 08:58:27 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:29:25.616 08:58:27 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:29:25.616 08:58:27 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:29:25.616 08:58:27 -- setup/driver.sh@32 -- # return 1
00:29:25.616 08:58:27 -- setup/driver.sh@38 -- # uio
00:29:25.616 08:58:27 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:29:25.616 08:58:27 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:29:25.616 08:58:27 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:29:25.616 08:58:27 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:29:25.616 Looking for driver=uio_pci_generic
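The pick_driver sequence above reduces to: prefer vfio when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve the module. A condensed sketch of that decision, not the verbatim script:

# Condensed pick_driver logic as visible in the trace.
pick_driver() {
    shopt -s nullglob                  # an empty iommu_groups dir must count as zero
    local unsafe_vfio=
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci                  # driver name assumed; this branch is not taken here
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

On this VM there are no IOMMU groups and unsafe mode is off, so the fallback fires and the test proceeds with uio_pci_generic.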
00:29:25.616 08:58:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:29:25.616 08:58:27 -- setup/driver.sh@45 -- # setup output config
00:29:25.616 08:58:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:29:25.616 08:58:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:29:25.616 08:58:27 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:29:25.616 08:58:27 -- setup/driver.sh@58 -- # continue
00:29:25.616 08:58:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:29:26.550 08:58:28 (the marker triplet — [[ -> == \-\> ]], [[ uio_pci_generic == uio_pci_generic ]], read -r _ _ _ _ marker setup_driver — runs four times, once per detected controller, and matches every time)
00:29:26.550 08:58:28 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:29:26.550 08:58:28 -- setup/driver.sh@65 -- # setup reset
00:29:26.550 08:58:28 -- setup/common.sh@9 -- # [[ reset == output ]]
00:29:26.550 08:58:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:33.185 00:29:33.185
00:29:33.185 real 0m7.561s
00:29:33.185 user 0m0.848s
00:29:33.185 sys 0m1.814s
00:29:33.185 08:58:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:33.185 ************************************
00:29:33.185 END TEST guess_driver
00:29:33.185 ************************************
00:29:33.185 08:58:34 -- common/autotest_common.sh@10 -- # set +x
00:29:33.185 ************************************
00:29:33.185 END TEST driver
00:29:33.185 ************************************
00:29:33.185 00:29:33.185
00:29:33.185 real 0m14.009s
00:29:33.185 user 0m1.292s
00:29:33.185 sys 0m2.937s
00:29:33.185 08:58:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:33.185 08:58:34 -- common/autotest_common.sh@10 -- # set +x
00:29:33.185 08:58:34 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:29:33.185 08:58:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:33.185 08:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:33.185 08:58:34 -- common/autotest_common.sh@10 -- # set +x
00:29:33.186 ************************************
00:29:33.186 START TEST devices
00:29:33.186 ************************************
00:29:33.186 08:58:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:29:33.186 * Looking for test storage...
00:29:33.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:29:33.186 08:58:34 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:29:33.186 08:58:34 -- setup/devices.sh@192 -- # setup reset
00:29:33.186 08:58:34 -- setup/common.sh@9 -- # [[ reset == output ]]
00:29:33.186 08:58:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:34.121 08:58:36 -- setup/devices.sh@194 -- # get_zoned_devs
00:29:34.121 08:58:36 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:29:34.121 08:58:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:29:34.121 08:58:36 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:29:34.121 08:58:36 (is_block_zoned runs in turn for nvme0n1, nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3c3n1 and nvme3n1; every device's /sys/block/<dev>/queue/zoned reads "none", so the [[ none != none ]] test fails and nothing is added to zoned_devs)
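The per-device checks condensed above all ride on one tiny predicate; a sketch of is_block_zoned consistent with the [[ -e ... ]] and [[ none != none ]] tests in the trace:

# A block device counts as zoned when its queue/zoned attribute exists
# and holds something other than "none".
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< /sys/block/$device/queue/zoned) != none ]]
}

Every namespace here reports "none", so zoned_devs stays empty and all drives remain candidates for the device tests that follow.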
device=nvme3n1 00:29:34.122 08:58:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:34.122 08:58:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:34.122 08:58:36 -- setup/devices.sh@196 -- # blocks=() 00:29:34.122 08:58:36 -- setup/devices.sh@196 -- # declare -a blocks 00:29:34.122 08:58:36 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:29:34.122 08:58:36 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:29:34.122 08:58:36 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:29:34.122 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.122 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:29:34.122 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme0 00:29:34.122 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:29:34.122 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:29:34.122 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:29:34.122 08:58:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:34.122 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:34.380 No valid GPT data, bailing 00:29:34.380 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:34.380 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.380 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:29:34.380 08:58:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:29:34.380 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:34.380 08:58:36 -- setup/common.sh@80 -- # echo 5368709120 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:29:34.380 08:58:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:29:34.380 08:58:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:29:34.380 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.380 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:29:34.380 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:29:34.380 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:29:34.380 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:29:34.380 08:58:36 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:34.380 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:34.380 No valid GPT data, bailing 00:29:34.380 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:34.380 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.380 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:29:34.380 08:58:36 -- setup/common.sh@76 -- # local dev=nvme1n1 00:29:34.380 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:34.380 08:58:36 -- setup/common.sh@80 -- # echo 6343335936 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:29:34.380 08:58:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:29:34.380 08:58:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:29:34.380 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.380 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:29:34.380 08:58:36 -- 
setup/devices.sh@201 -- # ctrl=nvme2 00:29:34.380 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:29:34.380 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:29:34.380 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:29:34.380 08:58:36 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:29:34.380 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:29:34.380 No valid GPT data, bailing 00:29:34.380 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.639 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:29:34.639 08:58:36 -- setup/common.sh@76 -- # local dev=nvme2n1 00:29:34.639 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:29:34.639 08:58:36 -- setup/common.sh@80 -- # echo 4294967296 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:29:34.639 08:58:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:29:34.639 08:58:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:29:34.639 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme2 00:29:34.639 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:29:34.639 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:29:34.639 08:58:36 -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:29:34.639 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:29:34.639 No valid GPT data, bailing 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.639 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:29:34.639 08:58:36 -- setup/common.sh@76 -- # local dev=nvme2n2 00:29:34.639 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:29:34.639 08:58:36 -- setup/common.sh@80 -- # echo 4294967296 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:29:34.639 08:58:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:29:34.639 08:58:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:29:34.639 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme2 00:29:34.639 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:29:34.639 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:29:34.639 08:58:36 -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:29:34.639 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:29:34.639 No valid GPT data, bailing 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.639 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.639 08:58:36 -- 
setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:29:34.639 08:58:36 -- setup/common.sh@76 -- # local dev=nvme2n3 00:29:34.639 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:29:34.639 08:58:36 -- setup/common.sh@80 -- # echo 4294967296 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:29:34.639 08:58:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:29:34.639 08:58:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:29:34.639 08:58:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:29:34.639 08:58:36 -- setup/devices.sh@201 -- # ctrl=nvme3 00:29:34.639 08:58:36 -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:29:34.639 08:58:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:29:34.639 08:58:36 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:29:34.639 08:58:36 -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:29:34.639 08:58:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:29:34.639 No valid GPT data, bailing 00:29:34.639 08:58:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:29:34.896 08:58:36 -- scripts/common.sh@391 -- # pt= 00:29:34.896 08:58:36 -- scripts/common.sh@392 -- # return 1 00:29:34.897 08:58:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:29:34.897 08:58:36 -- setup/common.sh@76 -- # local dev=nvme3n1 00:29:34.897 08:58:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:29:34.897 08:58:36 -- setup/common.sh@80 -- # echo 1073741824 00:29:34.897 08:58:36 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:29:34.897 08:58:36 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:29:34.897 08:58:36 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:29:34.897 08:58:36 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:29:34.897 08:58:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:34.897 08:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:34.897 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:29:34.897 ************************************ 00:29:34.897 START TEST nvme_mount 00:29:34.897 ************************************ 00:29:34.897 08:58:36 -- common/autotest_common.sh@1111 -- # nvme_mount 00:29:34.897 08:58:36 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:29:34.897 08:58:36 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:29:34.897 08:58:36 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:34.897 08:58:36 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:34.897 08:58:36 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:29:34.897 08:58:36 -- setup/common.sh@39 -- # local disk=nvme0n1 00:29:34.897 08:58:36 -- setup/common.sh@40 -- # local part_no=1 00:29:34.897 08:58:36 -- setup/common.sh@41 -- # local size=1073741824 00:29:34.897 08:58:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:29:34.897 08:58:36 -- setup/common.sh@44 -- # parts=() 00:29:34.897 08:58:36 -- setup/common.sh@44 -- # local parts 00:29:34.897 08:58:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:29:34.897 08:58:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:29:34.897 08:58:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:29:34.897 08:58:36 -- setup/common.sh@46 -- # (( part++ )) 
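The scan above builds the candidate disk list before nvme_mount starts: every /sys/block/nvme* entry except the multipath controller nodes (the !(*c*) extglob) is probed with spdk-gpt.py and blkid for an existing partition table, its size is compared against min_disk_size (3221225472 bytes, i.e. 3 GiB, which is why the 1 GiB nvme3n1 is skipped), and each survivor is mapped to its PCI address. A sketch of that size/PCI bookkeeping using plain sysfs (the sysfs layout and sector math here are assumptions; the harness resolves sizes through sec_size_to_bytes instead):

    # Hedged sketch: keep NVMe namespaces of at least 3 GiB and remember
    # which PCI controller each sits behind, mirroring blocks[] and
    # blocks_to_pci[] in the trace above.
    shopt -s extglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    declare -a blocks
    declare -A blocks_to_pci
    for block in /sys/block/nvme!(*c*); do
        name=${block##*/}
        bytes=$(( $(cat "$block/size") * 512 ))                   # sysfs size is in 512 B sectors
        pci=$(basename "$(readlink -f "$block/device/device")")   # assumed: namespace -> ctrl -> PCI dev
        (( bytes >= min_disk_size )) || continue
        blocks+=("$name")
        blocks_to_pci[$name]=$pci
    done
    for b in "${blocks[@]}"; do echo "$b -> ${blocks_to_pci[$b]}"; done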
00:29:34.897 08:58:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:29:34.897 08:58:36 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:29:34.897 08:58:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:29:34.897 08:58:36 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:29:35.911 Creating new GPT entries in memory. 00:29:35.911 GPT data structures destroyed! You may now partition the disk using fdisk or 00:29:35.911 other utilities. 00:29:35.911 08:58:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:29:35.911 08:58:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:29:35.911 08:58:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:29:35.911 08:58:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:29:35.911 08:58:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:29:36.847 Creating new GPT entries in memory. 00:29:36.847 The operation has completed successfully. 00:29:36.847 08:58:38 -- setup/common.sh@57 -- # (( part++ )) 00:29:36.847 08:58:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:29:36.847 08:58:38 -- setup/common.sh@62 -- # wait 58816 00:29:36.847 08:58:38 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:36.847 08:58:38 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:29:36.847 08:58:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:36.847 08:58:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:29:36.847 08:58:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:29:37.106 08:58:38 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:37.106 08:58:38 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:37.106 08:58:38 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:29:37.106 08:58:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:29:37.106 08:58:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:37.106 08:58:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:37.106 08:58:38 -- setup/devices.sh@53 -- # local found=0 00:29:37.106 08:58:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:29:37.106 08:58:38 -- setup/devices.sh@56 -- # : 00:29:37.106 08:58:38 -- setup/devices.sh@59 -- # local pci status 00:29:37.106 08:58:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.106 08:58:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:29:37.106 08:58:38 -- setup/devices.sh@47 -- # setup output config 00:29:37.106 08:58:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:37.106 08:58:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:29:37.365 08:58:39 -- setup/devices.sh@63 -- # found=1 00:29:37.365 
08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.365 08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.365 08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.365 08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.365 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.365 08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.932 08:58:39 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:37.932 08:58:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:37.932 08:58:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:29:37.932 08:58:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:29:37.932 08:58:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:37.932 08:58:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:29:37.932 08:58:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:37.932 08:58:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:29:37.932 08:58:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:37.932 08:58:39 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:37.932 08:58:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:29:37.932 08:58:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:29:37.932 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:29:37.932 08:58:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:29:37.932 08:58:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:29:38.190 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:29:38.190 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:29:38.190 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:29:38.190 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:29:38.190 08:58:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:29:38.190 08:58:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:29:38.190 08:58:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:38.492 08:58:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:29:38.492 08:58:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:29:38.492 08:58:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:38.492 08:58:40 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:38.492 08:58:40 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:29:38.492 08:58:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:29:38.492 08:58:40 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:38.492 08:58:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:38.492 08:58:40 -- setup/devices.sh@53 -- # local found=0 00:29:38.492 08:58:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:29:38.492 08:58:40 -- setup/devices.sh@56 -- # : 00:29:38.492 08:58:40 -- setup/devices.sh@59 -- # local pci status 00:29:38.492 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:38.492 08:58:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:29:38.492 08:58:40 -- setup/devices.sh@47 -- # setup output config 00:29:38.492 08:58:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:38.492 08:58:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:38.492 08:58:40 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:38.492 08:58:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:29:38.492 08:58:40 -- setup/devices.sh@63 -- # found=1 00:29:38.492 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:38.492 08:58:40 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:38.492 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:38.778 08:58:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:38.778 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:38.778 08:58:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:38.778 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:38.778 08:58:40 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:38.778 08:58:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.037 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.037 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.295 08:58:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:29:39.295 08:58:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:29:39.295 08:58:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:39.295 08:58:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:29:39.295 08:58:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:29:39.295 08:58:41 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:39.295 08:58:41 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:29:39.295 08:58:41 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:29:39.295 08:58:41 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:29:39.295 08:58:41 -- setup/devices.sh@50 -- # local mount_point= 00:29:39.295 08:58:41 -- setup/devices.sh@51 -- # local test_file= 00:29:39.295 08:58:41 -- setup/devices.sh@53 -- # local found=0 00:29:39.295 08:58:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:29:39.295 08:58:41 -- setup/devices.sh@59 -- # local pci status 00:29:39.295 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.295 08:58:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:29:39.295 08:58:41 -- setup/devices.sh@47 -- # 
setup output config 00:29:39.295 08:58:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:39.295 08:58:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:29:39.863 08:58:41 -- setup/devices.sh@63 -- # found=1 00:29:39.863 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.863 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.863 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.863 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:39.863 08:58:41 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:39.863 08:58:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:40.431 08:58:42 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:40.431 08:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:40.431 08:58:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:29:40.431 08:58:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:29:40.431 08:58:42 -- setup/devices.sh@68 -- # return 0 00:29:40.431 08:58:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:29:40.431 08:58:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:40.431 08:58:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:29:40.431 08:58:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:29:40.431 08:58:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:29:40.431 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:29:40.431 00:29:40.431 real 0m5.647s 00:29:40.431 user 0m1.496s 00:29:40.431 sys 0m1.854s 00:29:40.431 08:58:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:40.431 ************************************ 00:29:40.431 08:58:42 -- common/autotest_common.sh@10 -- # set +x 00:29:40.431 END TEST nvme_mount 00:29:40.431 ************************************ 00:29:40.690 08:58:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:29:40.690 08:58:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:40.690 08:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:40.690 08:58:42 -- common/autotest_common.sh@10 -- # set +x 00:29:40.690 ************************************ 00:29:40.690 START TEST dm_mount 00:29:40.690 ************************************ 00:29:40.690 08:58:42 -- common/autotest_common.sh@1111 -- # dm_mount 00:29:40.690 08:58:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:29:40.690 08:58:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:29:40.690 08:58:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:29:40.690 08:58:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:29:40.690 08:58:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:29:40.690 08:58:42 -- setup/common.sh@40 -- # local part_no=2 00:29:40.690 08:58:42 -- setup/common.sh@41 -- # local size=1073741824 00:29:40.690 08:58:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:29:40.691 
08:58:42 -- setup/common.sh@44 -- # parts=() 00:29:40.691 08:58:42 -- setup/common.sh@44 -- # local parts 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:29:40.691 08:58:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part++ )) 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:29:40.691 08:58:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part++ )) 00:29:40.691 08:58:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:29:40.691 08:58:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:29:40.691 08:58:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:29:40.691 08:58:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:29:41.666 Creating new GPT entries in memory. 00:29:41.666 GPT data structures destroyed! You may now partition the disk using fdisk or 00:29:41.666 other utilities. 00:29:41.666 08:58:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:29:41.666 08:58:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:29:41.666 08:58:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:29:41.666 08:58:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:29:41.666 08:58:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:29:42.602 Creating new GPT entries in memory. 00:29:42.602 The operation has completed successfully. 00:29:42.602 08:58:44 -- setup/common.sh@57 -- # (( part++ )) 00:29:42.602 08:58:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:29:42.602 08:58:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:29:42.602 08:58:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:29:42.602 08:58:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:29:43.986 The operation has completed successfully. 
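Both partitioning rounds above follow the same recipe: wipe the label, lay out fixed-size partitions from sector 2048 with sgdisk while holding flock on the disk, and wait for udev to surface the new nodes before touching them (sync_dev_uevents.sh in the harness). A condensed sketch of that recipe, with the size /= 4096 sector math copied from the trace and udevadm settle standing in for the event-sync helper:

    # Hedged sketch of the sgdisk flow traced above: zap, then create
    # part_no partitions of `size` sectors each, serialised with flock
    # so re-reads of the partition table don't race. The bounds come
    # out as 2048:264191 and 264192:526335, matching the log.
    disk=/dev/nvme0n1
    part_no=2
    size=$(( 1024 * 1024 * 1024 ))   # 1 GiB per partition, as in the trace
    (( size /= 4096 ))               # sector count used by the harness
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done
    udevadm settle                   # the harness waits via sync_dev_uevents.sh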
00:29:43.986 08:58:45 -- setup/common.sh@57 -- # (( part++ )) 00:29:43.986 08:58:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:29:43.986 08:58:45 -- setup/common.sh@62 -- # wait 59457 00:29:43.986 08:58:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:29:43.986 08:58:45 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:43.986 08:58:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:29:43.986 08:58:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:29:43.986 08:58:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:29:43.986 08:58:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:29:43.986 08:58:45 -- setup/devices.sh@161 -- # break 00:29:43.986 08:58:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:29:43.986 08:58:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:29:43.986 08:58:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:29:43.986 08:58:45 -- setup/devices.sh@166 -- # dm=dm-0 00:29:43.986 08:58:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:29:43.986 08:58:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:29:43.986 08:58:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:43.986 08:58:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:29:43.986 08:58:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:43.986 08:58:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:29:43.986 08:58:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:29:43.986 08:58:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:43.986 08:58:45 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:29:43.986 08:58:45 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:29:43.986 08:58:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:29:43.986 08:58:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:43.986 08:58:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:29:43.986 08:58:45 -- setup/devices.sh@53 -- # local found=0 00:29:43.986 08:58:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:29:43.986 08:58:45 -- setup/devices.sh@56 -- # : 00:29:43.986 08:58:45 -- setup/devices.sh@59 -- # local pci status 00:29:43.986 08:58:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:43.986 08:58:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:29:43.986 08:58:45 -- setup/devices.sh@47 -- # setup output config 00:29:43.986 08:58:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:43.986 08:58:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:43.986 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:43.986 08:58:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:29:43.986 08:58:46 -- setup/devices.sh@63 -- # found=1 00:29:43.986 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:43.987 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:43.987 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.245 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:44.245 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.245 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:44.245 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.245 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:44.245 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.504 08:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:44.504 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.763 08:58:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:29:44.763 08:58:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:29:44.763 08:58:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:44.763 08:58:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:29:44.763 08:58:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:29:44.763 08:58:46 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:44.763 08:58:46 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:29:44.763 08:58:46 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:29:44.763 08:58:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:29:44.763 08:58:46 -- setup/devices.sh@50 -- # local mount_point= 00:29:44.763 08:58:46 -- setup/devices.sh@51 -- # local test_file= 00:29:44.763 08:58:46 -- setup/devices.sh@53 -- # local found=0 00:29:44.763 08:58:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:29:44.763 08:58:46 -- setup/devices.sh@59 -- # local pci status 00:29:44.763 08:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:44.763 08:58:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:29:44.763 08:58:46 -- setup/devices.sh@47 -- # setup output config 00:29:44.763 08:58:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:29:44.763 08:58:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:29:45.023 08:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.023 08:58:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:29:45.023 08:58:47 -- setup/devices.sh@63 -- # found=1 00:29:45.023 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.023 08:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.023 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.280 08:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.280 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.280 08:58:47 -- setup/devices.sh@62 -- # [[ 
0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.280 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.280 08:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.280 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.847 08:58:47 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:45.847 08:58:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:29:45.847 08:58:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:29:45.847 08:58:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:29:45.847 08:58:47 -- setup/devices.sh@68 -- # return 0 00:29:45.847 08:58:47 -- setup/devices.sh@187 -- # cleanup_dm 00:29:45.847 08:58:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:45.847 08:58:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:29:45.847 08:58:47 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:29:46.105 08:58:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:29:46.105 08:58:47 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:29:46.105 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:29:46.105 08:58:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:29:46.105 08:58:47 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:29:46.105 00:29:46.105 real 0m5.364s 00:29:46.105 user 0m1.020s 00:29:46.105 sys 0m1.288s 00:29:46.105 08:58:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:46.105 08:58:47 -- common/autotest_common.sh@10 -- # set +x 00:29:46.105 ************************************ 00:29:46.105 END TEST dm_mount 00:29:46.105 ************************************ 00:29:46.105 08:58:48 -- setup/devices.sh@1 -- # cleanup 00:29:46.105 08:58:48 -- setup/devices.sh@11 -- # cleanup_nvme 00:29:46.105 08:58:48 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:29:46.105 08:58:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:29:46.105 08:58:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:29:46.105 08:58:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:29:46.105 08:58:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:29:46.363 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:29:46.363 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:29:46.363 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:29:46.363 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:29:46.363 08:58:48 -- setup/devices.sh@12 -- # cleanup_dm 00:29:46.363 08:58:48 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:29:46.363 08:58:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:29:46.363 08:58:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:29:46.363 08:58:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:29:46.363 08:58:48 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:29:46.363 08:58:48 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:29:46.363 00:29:46.363 real 0m13.490s 00:29:46.363 user 0m3.548s 00:29:46.363 sys 0m4.253s 00:29:46.363 08:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:46.363 08:58:48 -- common/autotest_common.sh@10 -- # set +x 00:29:46.363 ************************************ 00:29:46.363 END TEST devices 00:29:46.363 
************************************ 00:29:46.363 00:29:46.363 real 0m49.904s 00:29:46.363 user 0m11.719s 00:29:46.363 sys 0m17.206s 00:29:46.363 08:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:46.363 08:58:48 -- common/autotest_common.sh@10 -- # set +x 00:29:46.363 ************************************ 00:29:46.363 END TEST setup.sh 00:29:46.363 ************************************ 00:29:46.363 08:58:48 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:29:46.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:47.497 Hugepages 00:29:47.497 node hugesize free / total 00:29:47.497 node0 1048576kB 0 / 0 00:29:47.497 node0 2048kB 2048 / 2048 00:29:47.497 00:29:47.497 Type BDF Vendor Device NUMA Driver Device Block devices 00:29:47.754 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:29:47.754 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:29:47.755 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:29:48.012 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:29:48.012 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:29:48.012 08:58:49 -- spdk/autotest.sh@130 -- # uname -s 00:29:48.012 08:58:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:29:48.012 08:58:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:29:48.012 08:58:49 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:48.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:49.146 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:49.146 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:49.146 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:49.404 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:49.404 08:58:51 -- common/autotest_common.sh@1518 -- # sleep 1 00:29:50.337 08:58:52 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:50.337 08:58:52 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:50.337 08:58:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:50.337 08:58:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:50.337 08:58:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:29:50.337 08:58:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:29:50.337 08:58:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:50.337 08:58:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:50.338 08:58:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:29:50.596 08:58:52 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:29:50.596 08:58:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:29:50.596 08:58:52 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:50.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:51.113 Waiting for block devices as requested 00:29:51.113 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:51.371 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:51.371 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:51.642 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:56.935 * Events for some block/disk devices (0000:00:13.0) 
were not caught, they may be missing 00:29:56.935 08:58:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:56.935 08:58:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:56.935 08:58:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:56.935 08:58:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:56.935 08:58:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1543 -- # continue 00:29:56.935 08:58:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:56.935 08:58:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:56.935 08:58:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:56.935 08:58:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:56.935 08:58:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:56.935 08:58:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:56.935 08:58:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1543 -- # continue 00:29:56.935 08:58:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:56.935 08:58:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # grep 0000:00:12.0/nvme/nvme 00:29:56.935 08:58:58 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:29:56.935 08:58:58 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:29:56.935 08:58:58 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:56.936 08:58:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:56.936 08:58:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:56.936 08:58:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1543 -- # continue 00:29:56.936 08:58:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:56.936 08:58:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:29:56.936 08:58:58 -- common/autotest_common.sh@1488 -- # grep 0000:00:13.0/nvme/nvme 00:29:56.936 08:58:58 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:56.936 08:58:58 -- 
common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:56.936 08:58:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:56.936 08:58:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:56.936 08:58:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:56.936 08:58:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:56.936 08:58:58 -- common/autotest_common.sh@1543 -- # continue 00:29:56.936 08:58:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:29:56.936 08:58:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:56.936 08:58:58 -- common/autotest_common.sh@10 -- # set +x 00:29:56.936 08:58:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:29:56.936 08:58:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:56.936 08:58:58 -- common/autotest_common.sh@10 -- # set +x 00:29:56.936 08:58:58 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:57.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:58.069 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.069 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.069 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.328 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.328 08:59:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:29:58.328 08:59:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:58.328 08:59:00 -- common/autotest_common.sh@10 -- # set +x 00:29:58.328 08:59:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:29:58.328 08:59:00 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:29:58.328 08:59:00 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:29:58.328 08:59:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:29:58.328 08:59:00 -- common/autotest_common.sh@1563 -- # local bdfs 00:29:58.328 08:59:00 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:29:58.328 08:59:00 -- common/autotest_common.sh@1499 -- # bdfs=() 00:29:58.328 08:59:00 -- common/autotest_common.sh@1499 -- # local bdfs 00:29:58.328 08:59:00 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:58.328 08:59:00 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:29:58.328 08:59:00 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:29:58.328 08:59:00 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:58.328 08:59:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:58.328 08:59:00 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:29:58.328 08:59:00 
-- common/autotest_common.sh@1566 -- # device=0x0010 00:29:58.328 08:59:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:58.328 08:59:00 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:58.328 08:59:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:58.328 08:59:00 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:29:58.328 08:59:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:58.328 08:59:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:58.328 08:59:00 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:29:58.328 08:59:00 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:29:58.328 08:59:00 -- common/autotest_common.sh@1579 -- # return 0 00:29:58.328 08:59:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:29:58.328 08:59:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:29:58.328 08:59:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:29:58.329 08:59:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:29:58.329 08:59:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:29:58.329 08:59:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:58.329 08:59:00 -- common/autotest_common.sh@10 -- # set +x 00:29:58.329 08:59:00 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:58.329 08:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:58.329 08:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:58.329 08:59:00 -- common/autotest_common.sh@10 -- # set +x 00:29:58.587 ************************************ 00:29:58.587 START TEST env 00:29:58.587 ************************************ 00:29:58.587 08:59:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:58.587 * Looking for test storage... 
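The opal_revert_cleanup pass above walks every NVMe controller's PCI device id looking for 0x0a54, the drive family the opal tests can revert; on this QEMU rig every controller reports 0x0010, so the printf emits nothing and the step returns early. A sketch of that filter going straight through sysfs rather than gen_nvme.sh | jq:

    # Hedged sketch of the device-id filter above: keep only controllers
    # whose PCI device id is 0x0a54; on this run the list stays empty.
    bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 for QEMU's emulated NVMe
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"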
00:29:58.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:29:58.587 08:59:00 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:58.587 08:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:58.587 08:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:58.587 08:59:00 -- common/autotest_common.sh@10 -- # set +x 00:29:58.587 ************************************ 00:29:58.587 START TEST env_memory 00:29:58.587 ************************************ 00:29:58.587 08:59:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:58.587 00:29:58.587 00:29:58.587 CUnit - A unit testing framework for C - Version 2.1-3 00:29:58.587 http://cunit.sourceforge.net/ 00:29:58.587 00:29:58.587 00:29:58.587 Suite: memory 00:29:58.845 Test: alloc and free memory map ...[2024-04-18 08:59:00.706104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:29:58.845 passed 00:29:58.845 Test: mem map translation ...[2024-04-18 08:59:00.759563] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:29:58.845 [2024-04-18 08:59:00.759674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:29:58.845 [2024-04-18 08:59:00.759786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:29:58.845 [2024-04-18 08:59:00.759835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:29:58.845 passed 00:29:58.845 Test: mem map registration ...[2024-04-18 08:59:00.837945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:29:58.845 [2024-04-18 08:59:00.838062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:29:58.845 passed 00:29:58.845 Test: mem map adjacent registrations ...passed 00:29:58.845 00:29:58.845 Run Summary: Type Total Ran Passed Failed Inactive 00:29:58.845 suites 1 1 n/a 0 0 00:29:58.845 tests 4 4 4 0 0 00:29:58.845 asserts 152 152 152 0 n/a 00:29:58.845 00:29:58.845 Elapsed time = 0.274 seconds 00:29:59.103 00:29:59.103 real 0m0.311s 00:29:59.103 user 0m0.280s 00:29:59.103 sys 0m0.028s 00:29:59.103 08:59:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:59.103 08:59:00 -- common/autotest_common.sh@10 -- # set +x 00:29:59.103 ************************************ 00:29:59.103 END TEST env_memory 00:29:59.103 ************************************ 00:29:59.103 08:59:01 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:59.103 08:59:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:59.103 08:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:59.103 08:59:01 -- common/autotest_common.sh@10 -- # set +x 00:29:59.103 ************************************ 00:29:59.103 START TEST env_vtophys 00:29:59.103 ************************************ 00:29:59.103 08:59:01 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:59.103 EAL: lib.eal log level changed from notice to debug 00:29:59.103 EAL: Detected lcore 0 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 1 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 2 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 3 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 4 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 5 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 6 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 7 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 8 as core 0 on socket 0 00:29:59.103 EAL: Detected lcore 9 as core 0 on socket 0 00:29:59.103 EAL: Maximum logical cores by configuration: 128 00:29:59.103 EAL: Detected CPU lcores: 10 00:29:59.103 EAL: Detected NUMA nodes: 1 00:29:59.103 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:29:59.103 EAL: Detected shared linkage of DPDK 00:29:59.103 EAL: No shared files mode enabled, IPC will be disabled 00:29:59.103 EAL: Selected IOVA mode 'PA' 00:29:59.361 EAL: Probing VFIO support... 00:29:59.361 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:59.361 EAL: VFIO modules not loaded, skipping VFIO support... 00:29:59.361 EAL: Ask a virtual area of 0x2e000 bytes 00:29:59.361 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:29:59.361 EAL: Setting up physically contiguous memory... 00:29:59.362 EAL: Setting maximum number of open files to 524288 00:29:59.362 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:29:59.362 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:29:59.362 EAL: Ask a virtual area of 0x61000 bytes 00:29:59.362 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:29:59.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:59.362 EAL: Ask a virtual area of 0x400000000 bytes 00:29:59.362 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:29:59.362 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:29:59.362 EAL: Ask a virtual area of 0x61000 bytes 00:29:59.362 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:29:59.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:59.362 EAL: Ask a virtual area of 0x400000000 bytes 00:29:59.362 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:29:59.362 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:29:59.362 EAL: Ask a virtual area of 0x61000 bytes 00:29:59.362 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:29:59.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:59.362 EAL: Ask a virtual area of 0x400000000 bytes 00:29:59.362 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:29:59.362 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:29:59.362 EAL: Ask a virtual area of 0x61000 bytes 00:29:59.362 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:29:59.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:59.362 EAL: Ask a virtual area of 0x400000000 bytes 00:29:59.362 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:29:59.362 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:29:59.362 EAL: Hugepages will be freed exactly as allocated. 
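The EAL block above reserves four 16 GiB memseg-list windows (size 0x400000000) backed by 2 MiB hugepages (page size 0x800kB). That only works if hugepages were set aside before the process started; a minimal sketch of the usual reservation step, assuming this log's tree layout and an illustrative 2048 MB pool:

cd /home/vagrant/spdk_repo/spdk
sudo HUGEMEM=2048 scripts/setup.sh    # reserve 2 MiB hugepages (and bind devices)
grep Huge /proc/meminfo               # confirm the pool the EAL messages draw from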
00:29:59.362 EAL: No shared files mode enabled, IPC is disabled 00:29:59.362 EAL: No shared files mode enabled, IPC is disabled 00:29:59.362 EAL: TSC frequency is ~2100000 KHz 00:29:59.362 EAL: Main lcore 0 is ready (tid=7fad66ed8a40;cpuset=[0]) 00:29:59.362 EAL: Trying to obtain current memory policy. 00:29:59.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:59.362 EAL: Restoring previous memory policy: 0 00:29:59.362 EAL: request: mp_malloc_sync 00:29:59.362 EAL: No shared files mode enabled, IPC is disabled 00:29:59.362 EAL: Heap on socket 0 was expanded by 2MB 00:29:59.362 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:59.362 EAL: No PCI address specified using 'addr=' in: bus=pci 00:29:59.362 EAL: Mem event callback 'spdk:(nil)' registered 00:29:59.362 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:29:59.362 00:29:59.362 00:29:59.362 CUnit - A unit testing framework for C - Version 2.1-3 00:29:59.362 http://cunit.sourceforge.net/ 00:29:59.362 00:29:59.362 00:29:59.362 Suite: components_suite 00:29:59.928 Test: vtophys_malloc_test ...passed 00:29:59.928 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:29:59.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:59.928 EAL: Restoring previous memory policy: 4 00:29:59.928 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.928 EAL: request: mp_malloc_sync 00:29:59.928 EAL: No shared files mode enabled, IPC is disabled 00:29:59.928 EAL: Heap on socket 0 was expanded by 4MB 00:29:59.928 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.928 EAL: request: mp_malloc_sync 00:29:59.928 EAL: No shared files mode enabled, IPC is disabled 00:29:59.928 EAL: Heap on socket 0 was shrunk by 4MB 00:29:59.928 EAL: Trying to obtain current memory policy. 00:29:59.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:59.928 EAL: Restoring previous memory policy: 4 00:29:59.928 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.928 EAL: request: mp_malloc_sync 00:29:59.929 EAL: No shared files mode enabled, IPC is disabled 00:29:59.929 EAL: Heap on socket 0 was expanded by 6MB 00:29:59.929 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.929 EAL: request: mp_malloc_sync 00:29:59.929 EAL: No shared files mode enabled, IPC is disabled 00:29:59.929 EAL: Heap on socket 0 was shrunk by 6MB 00:29:59.929 EAL: Trying to obtain current memory policy. 00:29:59.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:59.929 EAL: Restoring previous memory policy: 4 00:29:59.929 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.929 EAL: request: mp_malloc_sync 00:29:59.929 EAL: No shared files mode enabled, IPC is disabled 00:29:59.929 EAL: Heap on socket 0 was expanded by 10MB 00:29:59.929 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.929 EAL: request: mp_malloc_sync 00:29:59.929 EAL: No shared files mode enabled, IPC is disabled 00:29:59.929 EAL: Heap on socket 0 was shrunk by 10MB 00:29:59.929 EAL: Trying to obtain current memory policy. 
00:29:59.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:59.929 EAL: Restoring previous memory policy: 4 00:29:59.929 EAL: Calling mem event callback 'spdk:(nil)' 00:29:59.929 EAL: request: mp_malloc_sync 00:29:59.929 EAL: No shared files mode enabled, IPC is disabled 00:29:59.929 EAL: Heap on socket 0 was expanded by 18MB 00:30:00.187 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.187 EAL: request: mp_malloc_sync 00:30:00.187 EAL: No shared files mode enabled, IPC is disabled 00:30:00.187 EAL: Heap on socket 0 was shrunk by 18MB 00:30:00.187 EAL: Trying to obtain current memory policy. 00:30:00.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:00.187 EAL: Restoring previous memory policy: 4 00:30:00.187 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.187 EAL: request: mp_malloc_sync 00:30:00.187 EAL: No shared files mode enabled, IPC is disabled 00:30:00.187 EAL: Heap on socket 0 was expanded by 34MB 00:30:00.187 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.187 EAL: request: mp_malloc_sync 00:30:00.187 EAL: No shared files mode enabled, IPC is disabled 00:30:00.187 EAL: Heap on socket 0 was shrunk by 34MB 00:30:00.187 EAL: Trying to obtain current memory policy. 00:30:00.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:00.187 EAL: Restoring previous memory policy: 4 00:30:00.187 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.187 EAL: request: mp_malloc_sync 00:30:00.187 EAL: No shared files mode enabled, IPC is disabled 00:30:00.187 EAL: Heap on socket 0 was expanded by 66MB 00:30:00.445 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.445 EAL: request: mp_malloc_sync 00:30:00.445 EAL: No shared files mode enabled, IPC is disabled 00:30:00.445 EAL: Heap on socket 0 was shrunk by 66MB 00:30:00.445 EAL: Trying to obtain current memory policy. 00:30:00.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:00.445 EAL: Restoring previous memory policy: 4 00:30:00.445 EAL: Calling mem event callback 'spdk:(nil)' 00:30:00.445 EAL: request: mp_malloc_sync 00:30:00.445 EAL: No shared files mode enabled, IPC is disabled 00:30:00.445 EAL: Heap on socket 0 was expanded by 130MB 00:30:01.013 EAL: Calling mem event callback 'spdk:(nil)' 00:30:01.013 EAL: request: mp_malloc_sync 00:30:01.013 EAL: No shared files mode enabled, IPC is disabled 00:30:01.013 EAL: Heap on socket 0 was shrunk by 130MB 00:30:01.013 EAL: Trying to obtain current memory policy. 00:30:01.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:01.271 EAL: Restoring previous memory policy: 4 00:30:01.271 EAL: Calling mem event callback 'spdk:(nil)' 00:30:01.271 EAL: request: mp_malloc_sync 00:30:01.271 EAL: No shared files mode enabled, IPC is disabled 00:30:01.271 EAL: Heap on socket 0 was expanded by 258MB 00:30:01.839 EAL: Calling mem event callback 'spdk:(nil)' 00:30:01.839 EAL: request: mp_malloc_sync 00:30:01.839 EAL: No shared files mode enabled, IPC is disabled 00:30:01.839 EAL: Heap on socket 0 was shrunk by 258MB 00:30:02.408 EAL: Trying to obtain current memory policy. 
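Each "expanded by / shrunk by" pair above is the malloc test roughly doubling a DMA-safe allocation and releasing it again; the sizes follow 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, ...). The same growth is visible from the shell in the kernel's hugepage accounting while the test runs; a sketch:

watch -n 1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'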
00:30:02.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:02.666 EAL: Restoring previous memory policy: 4 00:30:02.666 EAL: Calling mem event callback 'spdk:(nil)' 00:30:02.666 EAL: request: mp_malloc_sync 00:30:02.666 EAL: No shared files mode enabled, IPC is disabled 00:30:02.666 EAL: Heap on socket 0 was expanded by 514MB 00:30:04.041 EAL: Calling mem event callback 'spdk:(nil)' 00:30:04.041 EAL: request: mp_malloc_sync 00:30:04.041 EAL: No shared files mode enabled, IPC is disabled 00:30:04.041 EAL: Heap on socket 0 was shrunk by 514MB 00:30:04.609 EAL: Trying to obtain current memory policy. 00:30:04.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:30:04.867 EAL: Restoring previous memory policy: 4 00:30:04.867 EAL: Calling mem event callback 'spdk:(nil)' 00:30:04.867 EAL: request: mp_malloc_sync 00:30:04.867 EAL: No shared files mode enabled, IPC is disabled 00:30:04.867 EAL: Heap on socket 0 was expanded by 1026MB 00:30:07.434 EAL: Calling mem event callback 'spdk:(nil)' 00:30:07.434 EAL: request: mp_malloc_sync 00:30:07.434 EAL: No shared files mode enabled, IPC is disabled 00:30:07.434 EAL: Heap on socket 0 was shrunk by 1026MB 00:30:09.336 passed 00:30:09.336 00:30:09.336 Run Summary: Type Total Ran Passed Failed Inactive 00:30:09.336 suites 1 1 n/a 0 0 00:30:09.336 tests 2 2 2 0 0 00:30:09.336 asserts 6496 6496 6496 0 n/a 00:30:09.336 00:30:09.336 Elapsed time = 9.600 seconds 00:30:09.336 EAL: Calling mem event callback 'spdk:(nil)' 00:30:09.336 EAL: request: mp_malloc_sync 00:30:09.336 EAL: No shared files mode enabled, IPC is disabled 00:30:09.336 EAL: Heap on socket 0 was shrunk by 2MB 00:30:09.336 EAL: No shared files mode enabled, IPC is disabled 00:30:09.336 EAL: No shared files mode enabled, IPC is disabled 00:30:09.336 EAL: No shared files mode enabled, IPC is disabled 00:30:09.336 00:30:09.336 real 0m9.991s 00:30:09.336 user 0m8.791s 00:30:09.336 sys 0m1.009s 00:30:09.336 08:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:09.336 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.336 ************************************ 00:30:09.336 END TEST env_vtophys 00:30:09.336 ************************************ 00:30:09.336 08:59:11 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:30:09.336 08:59:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:09.336 08:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:09.336 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.336 ************************************ 00:30:09.336 START TEST env_pci 00:30:09.336 ************************************ 00:30:09.336 08:59:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:30:09.336 00:30:09.336 00:30:09.336 CUnit - A unit testing framework for C - Version 2.1-3 00:30:09.336 http://cunit.sourceforge.net/ 00:30:09.336 00:30:09.336 00:30:09.336 Suite: pci 00:30:09.336 Test: pci_hook ...[2024-04-18 08:59:11.264167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61341 has claimed it 00:30:09.336 EAL: Cannot find device (10000:00:01.0) 00:30:09.336 EAL: Failed to attach device on primary process 00:30:09.336 passed 00:30:09.336 00:30:09.336 Run Summary: Type Total Ran Passed Failed Inactive 00:30:09.336 suites 1 1 n/a 0 0 00:30:09.336 tests 1 1 1 0 0 00:30:09.336 asserts 25 25 25 0 n/a 00:30:09.336 00:30:09.336 Elapsed 
time = 0.011 seconds 00:30:09.336 00:30:09.336 real 0m0.105s 00:30:09.336 user 0m0.047s 00:30:09.336 sys 0m0.055s 00:30:09.336 08:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:09.336 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.336 ************************************ 00:30:09.336 END TEST env_pci 00:30:09.336 ************************************ 00:30:09.336 08:59:11 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:30:09.336 08:59:11 -- env/env.sh@15 -- # uname 00:30:09.336 08:59:11 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:30:09.336 08:59:11 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:30:09.336 08:59:11 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:30:09.336 08:59:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:09.336 08:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:09.336 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.636 ************************************ 00:30:09.636 START TEST env_dpdk_post_init 00:30:09.636 ************************************ 00:30:09.636 08:59:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:30:09.636 EAL: Detected CPU lcores: 10 00:30:09.636 EAL: Detected NUMA nodes: 1 00:30:09.636 EAL: Detected shared linkage of DPDK 00:30:09.636 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:30:09.636 EAL: Selected IOVA mode 'PA' 00:30:09.636 TELEMETRY: No legacy callbacks, legacy socket not created 00:30:09.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:30:09.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:30:09.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:30:09.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:30:09.896 Starting DPDK initialization... 00:30:09.896 Starting SPDK post initialization... 00:30:09.896 SPDK NVMe probe 00:30:09.896 Attaching to 0000:00:10.0 00:30:09.896 Attaching to 0000:00:11.0 00:30:09.896 Attaching to 0000:00:12.0 00:30:09.896 Attaching to 0000:00:13.0 00:30:09.896 Attached to 0000:00:10.0 00:30:09.896 Attached to 0000:00:11.0 00:30:09.896 Attached to 0000:00:13.0 00:30:09.896 Attached to 0000:00:12.0 00:30:09.896 Cleaning up... 
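The probe above attaches all four QEMU NVMe controllers (vendor:device 1b36:0010) to the spdk_nvme driver and detaches them again during cleanup. The binary can be re-run outside the harness with the same flags the log shows; a sketch:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
sudo "$SPDK/scripts/setup.sh" status   # show how each BDF is currently bound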
00:30:09.896 00:30:09.896 real 0m0.339s 00:30:09.896 user 0m0.113s 00:30:09.896 sys 0m0.125s 00:30:09.896 08:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:09.896 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.896 ************************************ 00:30:09.896 END TEST env_dpdk_post_init 00:30:09.896 ************************************ 00:30:09.896 08:59:11 -- env/env.sh@26 -- # uname 00:30:09.896 08:59:11 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:30:09.896 08:59:11 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:30:09.896 08:59:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:09.896 08:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:09.896 08:59:11 -- common/autotest_common.sh@10 -- # set +x 00:30:09.896 ************************************ 00:30:09.896 START TEST env_mem_callbacks 00:30:09.896 ************************************ 00:30:09.896 08:59:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:30:10.155 EAL: Detected CPU lcores: 10 00:30:10.155 EAL: Detected NUMA nodes: 1 00:30:10.155 EAL: Detected shared linkage of DPDK 00:30:10.155 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:30:10.155 EAL: Selected IOVA mode 'PA' 00:30:10.155 TELEMETRY: No legacy callbacks, legacy socket not created 00:30:10.155 00:30:10.155 00:30:10.155 CUnit - A unit testing framework for C - Version 2.1-3 00:30:10.155 http://cunit.sourceforge.net/ 00:30:10.155 00:30:10.155 00:30:10.155 Suite: memory 00:30:10.155 Test: test ... 00:30:10.155 register 0x200000200000 2097152 00:30:10.155 malloc 3145728 00:30:10.155 register 0x200000400000 4194304 00:30:10.155 buf 0x2000004fffc0 len 3145728 PASSED 00:30:10.155 malloc 64 00:30:10.155 buf 0x2000004ffec0 len 64 PASSED 00:30:10.155 malloc 4194304 00:30:10.155 register 0x200000800000 6291456 00:30:10.155 buf 0x2000009fffc0 len 4194304 PASSED 00:30:10.155 free 0x2000004fffc0 3145728 00:30:10.155 free 0x2000004ffec0 64 00:30:10.155 unregister 0x200000400000 4194304 PASSED 00:30:10.155 free 0x2000009fffc0 4194304 00:30:10.155 unregister 0x200000800000 6291456 PASSED 00:30:10.155 malloc 8388608 00:30:10.155 register 0x200000400000 10485760 00:30:10.155 buf 0x2000005fffc0 len 8388608 PASSED 00:30:10.155 free 0x2000005fffc0 8388608 00:30:10.155 unregister 0x200000400000 10485760 PASSED 00:30:10.155 passed 00:30:10.155 00:30:10.155 Run Summary: Type Total Ran Passed Failed Inactive 00:30:10.155 suites 1 1 n/a 0 0 00:30:10.155 tests 1 1 1 0 0 00:30:10.155 asserts 15 15 15 0 n/a 00:30:10.155 00:30:10.155 Elapsed time = 0.083 seconds 00:30:10.413 00:30:10.413 real 0m0.331s 00:30:10.413 user 0m0.127s 00:30:10.413 sys 0m0.098s 00:30:10.413 08:59:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:10.413 08:59:12 -- common/autotest_common.sh@10 -- # set +x 00:30:10.413 ************************************ 00:30:10.413 END TEST env_mem_callbacks 00:30:10.413 ************************************ 00:30:10.413 ************************************ 00:30:10.413 END TEST env 00:30:10.413 ************************************ 00:30:10.413 00:30:10.413 real 0m11.847s 00:30:10.413 user 0m9.599s 00:30:10.413 sys 0m1.765s 00:30:10.413 08:59:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:10.413 08:59:12 -- common/autotest_common.sh@10 -- # set +x 00:30:10.413 08:59:12 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
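Every START TEST / END TEST banner in this log, including the run_test rpc call just issued, comes from the run_test helper in test/common/autotest_common.sh: it brackets a command with the banner pair and times it (the real/user/sys lines above). Its calling convention, sketched with the suites from this log:

source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh

run_test "env" /home/vagrant/spdk_repo/spdk/test/env/env.sh
run_test "rpc" /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh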
00:30:10.413 08:59:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:10.413 08:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:10.413 08:59:12 -- common/autotest_common.sh@10 -- # set +x 00:30:10.413 ************************************ 00:30:10.413 START TEST rpc 00:30:10.413 ************************************ 00:30:10.413 08:59:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:30:10.673 * Looking for test storage... 00:30:10.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:30:10.673 08:59:12 -- rpc/rpc.sh@65 -- # spdk_pid=61480 00:30:10.673 08:59:12 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:30:10.673 08:59:12 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:30:10.673 08:59:12 -- rpc/rpc.sh@67 -- # waitforlisten 61480 00:30:10.673 08:59:12 -- common/autotest_common.sh@817 -- # '[' -z 61480 ']' 00:30:10.673 08:59:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.673 08:59:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:10.673 08:59:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.673 08:59:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:10.673 08:59:12 -- common/autotest_common.sh@10 -- # set +x 00:30:10.673 [2024-04-18 08:59:12.717435] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:30:10.673 [2024-04-18 08:59:12.717831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:30:10.931 [2024-04-18 08:59:12.909690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.196 [2024-04-18 08:59:13.222493] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:30:11.196 [2024-04-18 08:59:13.222728] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61480' to capture a snapshot of events at runtime. 00:30:11.196 [2024-04-18 08:59:13.222877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.196 [2024-04-18 08:59:13.222938] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.196 [2024-04-18 08:59:13.222988] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61480 for offline analysis/debug. 
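spdk_tgt was started here with -e bdev, so the bdev tracepoint group (mask 0x8, visible later in trace_get_info) records into the shared-memory file named in the notice above. Either form the notice suggests works while the target is up; a sketch:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/bin/spdk_trace" -s spdk_tgt -p 61480   # live snapshot, as the notice says
sudo cp /dev/shm/spdk_tgt_trace.pid61480 /tmp/           # or keep the ring for offline analysis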
00:30:11.196 [2024-04-18 08:59:13.223086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.625 08:59:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:12.625 08:59:14 -- common/autotest_common.sh@850 -- # return 0 00:30:12.625 08:59:14 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:30:12.625 08:59:14 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:30:12.625 08:59:14 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:30:12.625 08:59:14 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:30:12.625 08:59:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:12.625 08:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:12.625 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.625 ************************************ 00:30:12.625 START TEST rpc_integrity 00:30:12.625 ************************************ 00:30:12.625 08:59:14 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:30:12.625 08:59:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:12.625 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.625 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.625 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.625 08:59:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:30:12.625 08:59:14 -- rpc/rpc.sh@13 -- # jq length 00:30:12.625 08:59:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:30:12.625 08:59:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:30:12.625 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.625 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.625 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.625 08:59:14 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:30:12.625 08:59:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:30:12.625 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.625 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.625 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.625 08:59:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:30:12.625 { 00:30:12.625 "name": "Malloc0", 00:30:12.625 "aliases": [ 00:30:12.625 "5c19ddcc-afda-4275-a8f5-960c6b33acf4" 00:30:12.625 ], 00:30:12.625 "product_name": "Malloc disk", 00:30:12.625 "block_size": 512, 00:30:12.625 "num_blocks": 16384, 00:30:12.625 "uuid": "5c19ddcc-afda-4275-a8f5-960c6b33acf4", 00:30:12.625 "assigned_rate_limits": { 00:30:12.625 "rw_ios_per_sec": 0, 00:30:12.625 "rw_mbytes_per_sec": 0, 00:30:12.625 "r_mbytes_per_sec": 0, 00:30:12.625 "w_mbytes_per_sec": 0 00:30:12.625 }, 00:30:12.625 "claimed": false, 00:30:12.625 "zoned": false, 00:30:12.625 "supported_io_types": { 00:30:12.625 "read": true, 00:30:12.625 "write": true, 00:30:12.625 "unmap": true, 00:30:12.625 "write_zeroes": true, 00:30:12.625 "flush": true, 00:30:12.625 "reset": true, 00:30:12.625 "compare": false, 00:30:12.625 "compare_and_write": false, 00:30:12.625 "abort": true, 00:30:12.625 "nvme_admin": false, 00:30:12.625 "nvme_io": false 00:30:12.625 }, 00:30:12.625 "memory_domains": [ 00:30:12.625 { 00:30:12.625 "dma_device_id": "system", 00:30:12.625 "dma_device_type": 1 
00:30:12.625 }, 00:30:12.625 { 00:30:12.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.625 "dma_device_type": 2 00:30:12.625 } 00:30:12.625 ], 00:30:12.625 "driver_specific": {} 00:30:12.625 } 00:30:12.625 ]' 00:30:12.625 08:59:14 -- rpc/rpc.sh@17 -- # jq length 00:30:12.625 08:59:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:30:12.625 08:59:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:30:12.625 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.625 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.625 [2024-04-18 08:59:14.559256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:30:12.625 [2024-04-18 08:59:14.560599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.625 [2024-04-18 08:59:14.560694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:30:12.625 [2024-04-18 08:59:14.560759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.625 [2024-04-18 08:59:14.564278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.625 [2024-04-18 08:59:14.564500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:30:12.625 Passthru0 00:30:12.625 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.625 08:59:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:30:12.626 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.626 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.626 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.626 08:59:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:30:12.626 { 00:30:12.626 "name": "Malloc0", 00:30:12.626 "aliases": [ 00:30:12.626 "5c19ddcc-afda-4275-a8f5-960c6b33acf4" 00:30:12.626 ], 00:30:12.626 "product_name": "Malloc disk", 00:30:12.626 "block_size": 512, 00:30:12.626 "num_blocks": 16384, 00:30:12.626 "uuid": "5c19ddcc-afda-4275-a8f5-960c6b33acf4", 00:30:12.626 "assigned_rate_limits": { 00:30:12.626 "rw_ios_per_sec": 0, 00:30:12.626 "rw_mbytes_per_sec": 0, 00:30:12.626 "r_mbytes_per_sec": 0, 00:30:12.626 "w_mbytes_per_sec": 0 00:30:12.626 }, 00:30:12.626 "claimed": true, 00:30:12.626 "claim_type": "exclusive_write", 00:30:12.626 "zoned": false, 00:30:12.626 "supported_io_types": { 00:30:12.626 "read": true, 00:30:12.626 "write": true, 00:30:12.626 "unmap": true, 00:30:12.626 "write_zeroes": true, 00:30:12.626 "flush": true, 00:30:12.626 "reset": true, 00:30:12.626 "compare": false, 00:30:12.626 "compare_and_write": false, 00:30:12.626 "abort": true, 00:30:12.626 "nvme_admin": false, 00:30:12.626 "nvme_io": false 00:30:12.626 }, 00:30:12.626 "memory_domains": [ 00:30:12.626 { 00:30:12.626 "dma_device_id": "system", 00:30:12.626 "dma_device_type": 1 00:30:12.626 }, 00:30:12.626 { 00:30:12.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.626 "dma_device_type": 2 00:30:12.626 } 00:30:12.626 ], 00:30:12.626 "driver_specific": {} 00:30:12.626 }, 00:30:12.626 { 00:30:12.626 "name": "Passthru0", 00:30:12.626 "aliases": [ 00:30:12.626 "80dfdd52-8c9c-5187-93ba-f24d4b96daba" 00:30:12.626 ], 00:30:12.626 "product_name": "passthru", 00:30:12.626 "block_size": 512, 00:30:12.626 "num_blocks": 16384, 00:30:12.626 "uuid": "80dfdd52-8c9c-5187-93ba-f24d4b96daba", 00:30:12.626 "assigned_rate_limits": { 00:30:12.626 "rw_ios_per_sec": 0, 00:30:12.626 "rw_mbytes_per_sec": 0, 00:30:12.626 "r_mbytes_per_sec": 0, 00:30:12.626 "w_mbytes_per_sec": 0 
00:30:12.626 }, 00:30:12.626 "claimed": false, 00:30:12.626 "zoned": false, 00:30:12.626 "supported_io_types": { 00:30:12.626 "read": true, 00:30:12.626 "write": true, 00:30:12.626 "unmap": true, 00:30:12.626 "write_zeroes": true, 00:30:12.626 "flush": true, 00:30:12.626 "reset": true, 00:30:12.626 "compare": false, 00:30:12.626 "compare_and_write": false, 00:30:12.626 "abort": true, 00:30:12.626 "nvme_admin": false, 00:30:12.626 "nvme_io": false 00:30:12.626 }, 00:30:12.626 "memory_domains": [ 00:30:12.626 { 00:30:12.626 "dma_device_id": "system", 00:30:12.626 "dma_device_type": 1 00:30:12.626 }, 00:30:12.626 { 00:30:12.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.626 "dma_device_type": 2 00:30:12.626 } 00:30:12.626 ], 00:30:12.626 "driver_specific": { 00:30:12.626 "passthru": { 00:30:12.626 "name": "Passthru0", 00:30:12.626 "base_bdev_name": "Malloc0" 00:30:12.626 } 00:30:12.626 } 00:30:12.626 } 00:30:12.626 ]' 00:30:12.626 08:59:14 -- rpc/rpc.sh@21 -- # jq length 00:30:12.626 08:59:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:30:12.626 08:59:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:30:12.626 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.626 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.626 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.626 08:59:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:12.626 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.626 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.626 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.626 08:59:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:30:12.626 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.626 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.884 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.884 08:59:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:30:12.884 08:59:14 -- rpc/rpc.sh@26 -- # jq length 00:30:12.884 08:59:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:30:12.885 00:30:12.885 real 0m0.369s 00:30:12.885 user 0m0.171s 00:30:12.885 sys 0m0.056s 00:30:12.885 08:59:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:12.885 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.885 ************************************ 00:30:12.885 END TEST rpc_integrity 00:30:12.885 ************************************ 00:30:12.885 08:59:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:30:12.885 08:59:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:12.885 08:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:12.885 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.885 ************************************ 00:30:12.885 START TEST rpc_plugins 00:30:12.885 ************************************ 00:30:12.885 08:59:14 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:30:12.885 08:59:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:30:12.885 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.885 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.885 08:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.885 08:59:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:30:12.885 08:59:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:30:12.885 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.885 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:12.885 08:59:14 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.885 08:59:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:30:12.885 { 00:30:12.885 "name": "Malloc1", 00:30:12.885 "aliases": [ 00:30:12.885 "1462010d-2b11-459d-b3c4-e86112f48365" 00:30:12.885 ], 00:30:12.885 "product_name": "Malloc disk", 00:30:12.885 "block_size": 4096, 00:30:12.885 "num_blocks": 256, 00:30:12.885 "uuid": "1462010d-2b11-459d-b3c4-e86112f48365", 00:30:12.885 "assigned_rate_limits": { 00:30:12.885 "rw_ios_per_sec": 0, 00:30:12.885 "rw_mbytes_per_sec": 0, 00:30:12.885 "r_mbytes_per_sec": 0, 00:30:12.885 "w_mbytes_per_sec": 0 00:30:12.885 }, 00:30:12.885 "claimed": false, 00:30:12.885 "zoned": false, 00:30:12.885 "supported_io_types": { 00:30:12.885 "read": true, 00:30:12.885 "write": true, 00:30:12.885 "unmap": true, 00:30:12.885 "write_zeroes": true, 00:30:12.885 "flush": true, 00:30:12.885 "reset": true, 00:30:12.885 "compare": false, 00:30:12.885 "compare_and_write": false, 00:30:12.885 "abort": true, 00:30:12.885 "nvme_admin": false, 00:30:12.885 "nvme_io": false 00:30:12.885 }, 00:30:12.885 "memory_domains": [ 00:30:12.885 { 00:30:12.885 "dma_device_id": "system", 00:30:12.885 "dma_device_type": 1 00:30:12.885 }, 00:30:12.885 { 00:30:12.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.885 "dma_device_type": 2 00:30:12.885 } 00:30:12.885 ], 00:30:12.885 "driver_specific": {} 00:30:12.885 } 00:30:12.885 ]' 00:30:12.885 08:59:14 -- rpc/rpc.sh@32 -- # jq length 00:30:13.143 08:59:14 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:30:13.143 08:59:14 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:30:13.143 08:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.143 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:30:13.143 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.143 08:59:15 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:30:13.143 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.143 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.143 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.143 08:59:15 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:30:13.143 08:59:15 -- rpc/rpc.sh@36 -- # jq length 00:30:13.143 08:59:15 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:30:13.143 00:30:13.143 real 0m0.163s 00:30:13.143 user 0m0.091s 00:30:13.143 sys 0m0.026s 00:30:13.143 08:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.143 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.143 ************************************ 00:30:13.143 END TEST rpc_plugins 00:30:13.143 ************************************ 00:30:13.143 08:59:15 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:30:13.143 08:59:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:13.143 08:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.143 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.143 ************************************ 00:30:13.143 START TEST rpc_trace_cmd_test 00:30:13.143 ************************************ 00:30:13.143 08:59:15 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:30:13.143 08:59:15 -- rpc/rpc.sh@40 -- # local info 00:30:13.143 08:59:15 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:30:13.143 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.143 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.143 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.401 08:59:15 -- rpc/rpc.sh@42 -- # 
info='{ 00:30:13.401 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61480", 00:30:13.401 "tpoint_group_mask": "0x8", 00:30:13.401 "iscsi_conn": { 00:30:13.401 "mask": "0x2", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "scsi": { 00:30:13.401 "mask": "0x4", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "bdev": { 00:30:13.401 "mask": "0x8", 00:30:13.401 "tpoint_mask": "0xffffffffffffffff" 00:30:13.401 }, 00:30:13.401 "nvmf_rdma": { 00:30:13.401 "mask": "0x10", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "nvmf_tcp": { 00:30:13.401 "mask": "0x20", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "ftl": { 00:30:13.401 "mask": "0x40", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "blobfs": { 00:30:13.401 "mask": "0x80", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "dsa": { 00:30:13.401 "mask": "0x200", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "thread": { 00:30:13.401 "mask": "0x400", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "nvme_pcie": { 00:30:13.401 "mask": "0x800", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "iaa": { 00:30:13.401 "mask": "0x1000", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "nvme_tcp": { 00:30:13.401 "mask": "0x2000", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "bdev_nvme": { 00:30:13.401 "mask": "0x4000", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 }, 00:30:13.401 "sock": { 00:30:13.401 "mask": "0x8000", 00:30:13.401 "tpoint_mask": "0x0" 00:30:13.401 } 00:30:13.401 }' 00:30:13.401 08:59:15 -- rpc/rpc.sh@43 -- # jq length 00:30:13.401 08:59:15 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:30:13.401 08:59:15 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:30:13.401 08:59:15 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:30:13.402 08:59:15 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:30:13.402 08:59:15 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:30:13.402 08:59:15 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:30:13.402 08:59:15 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:30:13.402 08:59:15 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:30:13.402 08:59:15 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:30:13.402 00:30:13.402 real 0m0.244s 00:30:13.402 user 0m0.189s 00:30:13.402 sys 0m0.042s 00:30:13.402 08:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.402 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.402 ************************************ 00:30:13.402 END TEST rpc_trace_cmd_test 00:30:13.402 ************************************ 00:30:13.660 08:59:15 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:30:13.660 08:59:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:30:13.660 08:59:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:30:13.660 08:59:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:13.660 08:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.660 ************************************ 00:30:13.660 START TEST rpc_daemon_integrity 00:30:13.660 ************************************ 00:30:13.660 08:59:15 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:30:13.660 08:59:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:13.660 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.660 08:59:15 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:30:13.660 08:59:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:30:13.660 08:59:15 -- rpc/rpc.sh@13 -- # jq length 00:30:13.660 08:59:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:30:13.660 08:59:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:30:13.660 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.660 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.660 08:59:15 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:30:13.660 08:59:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:30:13.660 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.660 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.660 08:59:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:30:13.660 { 00:30:13.660 "name": "Malloc2", 00:30:13.660 "aliases": [ 00:30:13.660 "58f19460-c77d-4b2d-aa8d-1c08e1291096" 00:30:13.660 ], 00:30:13.660 "product_name": "Malloc disk", 00:30:13.660 "block_size": 512, 00:30:13.660 "num_blocks": 16384, 00:30:13.660 "uuid": "58f19460-c77d-4b2d-aa8d-1c08e1291096", 00:30:13.660 "assigned_rate_limits": { 00:30:13.660 "rw_ios_per_sec": 0, 00:30:13.660 "rw_mbytes_per_sec": 0, 00:30:13.660 "r_mbytes_per_sec": 0, 00:30:13.660 "w_mbytes_per_sec": 0 00:30:13.660 }, 00:30:13.660 "claimed": false, 00:30:13.660 "zoned": false, 00:30:13.660 "supported_io_types": { 00:30:13.660 "read": true, 00:30:13.660 "write": true, 00:30:13.660 "unmap": true, 00:30:13.660 "write_zeroes": true, 00:30:13.660 "flush": true, 00:30:13.660 "reset": true, 00:30:13.660 "compare": false, 00:30:13.660 "compare_and_write": false, 00:30:13.660 "abort": true, 00:30:13.660 "nvme_admin": false, 00:30:13.660 "nvme_io": false 00:30:13.660 }, 00:30:13.660 "memory_domains": [ 00:30:13.660 { 00:30:13.660 "dma_device_id": "system", 00:30:13.660 "dma_device_type": 1 00:30:13.660 }, 00:30:13.660 { 00:30:13.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.660 "dma_device_type": 2 00:30:13.660 } 00:30:13.660 ], 00:30:13.660 "driver_specific": {} 00:30:13.660 } 00:30:13.660 ]' 00:30:13.660 08:59:15 -- rpc/rpc.sh@17 -- # jq length 00:30:13.660 08:59:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:30:13.660 08:59:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:30:13.660 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.660 [2024-04-18 08:59:15.741674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:30:13.660 [2024-04-18 08:59:15.741875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:13.660 [2024-04-18 08:59:15.741938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:13.660 [2024-04-18 08:59:15.742045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:13.660 [2024-04-18 08:59:15.744608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:13.660 [2024-04-18 08:59:15.744760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:30:13.660 Passthru0 00:30:13.660 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.660 08:59:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:30:13.660 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.660 08:59:15 -- common/autotest_common.sh@10 -- # set +x 
00:30:13.918 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.918 08:59:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:30:13.918 { 00:30:13.918 "name": "Malloc2", 00:30:13.918 "aliases": [ 00:30:13.918 "58f19460-c77d-4b2d-aa8d-1c08e1291096" 00:30:13.918 ], 00:30:13.919 "product_name": "Malloc disk", 00:30:13.919 "block_size": 512, 00:30:13.919 "num_blocks": 16384, 00:30:13.919 "uuid": "58f19460-c77d-4b2d-aa8d-1c08e1291096", 00:30:13.919 "assigned_rate_limits": { 00:30:13.919 "rw_ios_per_sec": 0, 00:30:13.919 "rw_mbytes_per_sec": 0, 00:30:13.919 "r_mbytes_per_sec": 0, 00:30:13.919 "w_mbytes_per_sec": 0 00:30:13.919 }, 00:30:13.919 "claimed": true, 00:30:13.919 "claim_type": "exclusive_write", 00:30:13.919 "zoned": false, 00:30:13.919 "supported_io_types": { 00:30:13.919 "read": true, 00:30:13.919 "write": true, 00:30:13.919 "unmap": true, 00:30:13.919 "write_zeroes": true, 00:30:13.919 "flush": true, 00:30:13.919 "reset": true, 00:30:13.919 "compare": false, 00:30:13.919 "compare_and_write": false, 00:30:13.919 "abort": true, 00:30:13.919 "nvme_admin": false, 00:30:13.919 "nvme_io": false 00:30:13.919 }, 00:30:13.919 "memory_domains": [ 00:30:13.919 { 00:30:13.919 "dma_device_id": "system", 00:30:13.919 "dma_device_type": 1 00:30:13.919 }, 00:30:13.919 { 00:30:13.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.919 "dma_device_type": 2 00:30:13.919 } 00:30:13.919 ], 00:30:13.919 "driver_specific": {} 00:30:13.919 }, 00:30:13.919 { 00:30:13.919 "name": "Passthru0", 00:30:13.919 "aliases": [ 00:30:13.919 "d7553272-4ffd-5692-ba6e-39db3bbd7a1e" 00:30:13.919 ], 00:30:13.919 "product_name": "passthru", 00:30:13.919 "block_size": 512, 00:30:13.919 "num_blocks": 16384, 00:30:13.919 "uuid": "d7553272-4ffd-5692-ba6e-39db3bbd7a1e", 00:30:13.919 "assigned_rate_limits": { 00:30:13.919 "rw_ios_per_sec": 0, 00:30:13.919 "rw_mbytes_per_sec": 0, 00:30:13.919 "r_mbytes_per_sec": 0, 00:30:13.919 "w_mbytes_per_sec": 0 00:30:13.919 }, 00:30:13.919 "claimed": false, 00:30:13.919 "zoned": false, 00:30:13.919 "supported_io_types": { 00:30:13.919 "read": true, 00:30:13.919 "write": true, 00:30:13.919 "unmap": true, 00:30:13.919 "write_zeroes": true, 00:30:13.919 "flush": true, 00:30:13.919 "reset": true, 00:30:13.919 "compare": false, 00:30:13.919 "compare_and_write": false, 00:30:13.919 "abort": true, 00:30:13.919 "nvme_admin": false, 00:30:13.919 "nvme_io": false 00:30:13.919 }, 00:30:13.919 "memory_domains": [ 00:30:13.919 { 00:30:13.919 "dma_device_id": "system", 00:30:13.919 "dma_device_type": 1 00:30:13.919 }, 00:30:13.919 { 00:30:13.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.919 "dma_device_type": 2 00:30:13.919 } 00:30:13.919 ], 00:30:13.919 "driver_specific": { 00:30:13.919 "passthru": { 00:30:13.919 "name": "Passthru0", 00:30:13.919 "base_bdev_name": "Malloc2" 00:30:13.919 } 00:30:13.919 } 00:30:13.919 } 00:30:13.919 ]' 00:30:13.919 08:59:15 -- rpc/rpc.sh@21 -- # jq length 00:30:13.919 08:59:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:30:13.919 08:59:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:30:13.919 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.919 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.919 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.919 08:59:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:30:13.919 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.919 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.919 08:59:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.919 08:59:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:30:13.919 08:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.919 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.919 08:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.919 08:59:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:30:13.919 08:59:15 -- rpc/rpc.sh@26 -- # jq length 00:30:13.919 08:59:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:30:13.919 00:30:13.919 real 0m0.333s 00:30:13.919 user 0m0.165s 00:30:13.919 sys 0m0.060s 00:30:13.919 08:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.919 08:59:15 -- common/autotest_common.sh@10 -- # set +x 00:30:13.919 ************************************ 00:30:13.919 END TEST rpc_daemon_integrity 00:30:13.919 ************************************ 00:30:13.919 08:59:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:13.919 08:59:15 -- rpc/rpc.sh@84 -- # killprocess 61480 00:30:13.919 08:59:15 -- common/autotest_common.sh@936 -- # '[' -z 61480 ']' 00:30:13.919 08:59:15 -- common/autotest_common.sh@940 -- # kill -0 61480 00:30:13.919 08:59:15 -- common/autotest_common.sh@941 -- # uname 00:30:13.919 08:59:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:13.919 08:59:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61480 00:30:13.919 08:59:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:13.919 08:59:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:13.919 08:59:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61480' 00:30:13.919 killing process with pid 61480 00:30:13.919 08:59:16 -- common/autotest_common.sh@955 -- # kill 61480 00:30:13.919 08:59:16 -- common/autotest_common.sh@960 -- # wait 61480 00:30:17.211 ************************************ 00:30:17.211 END TEST rpc 00:30:17.211 ************************************ 00:30:17.211 00:30:17.211 real 0m6.301s 00:30:17.211 user 0m6.922s 00:30:17.211 sys 0m1.092s 00:30:17.211 08:59:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.211 08:59:18 -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 08:59:18 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:30:17.211 08:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:17.211 08:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.211 08:59:18 -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 ************************************ 00:30:17.211 START TEST skip_rpc 00:30:17.211 ************************************ 00:30:17.211 08:59:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:30:17.211 * Looking for test storage... 
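The rpc suite that just finished issues every method through the rpc_cmd wrapper; the same calls can be made directly with scripts/rpc.py against a running spdk_tgt on the default /var/tmp/spdk.sock. A condensed sketch of the integrity flow asserted above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_malloc_create 8 512                      # 8 MB at 512 B blocks -> the 16384-block Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 (claim_type exclusive_write)
$RPC bdev_get_bdevs | jq length                    # 2, as the test checks
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_get_bdevs | jq length                    # back to 0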
00:30:17.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:30:17.211 08:59:18 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:30:17.211 08:59:18 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:30:17.211 08:59:18 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:30:17.211 08:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:17.211 08:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.211 08:59:18 -- common/autotest_common.sh@10 -- # set +x 00:30:17.211 ************************************ 00:30:17.211 START TEST skip_rpc 00:30:17.211 ************************************ 00:30:17.211 08:59:19 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:30:17.211 08:59:19 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61744 00:30:17.211 08:59:19 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:30:17.211 08:59:19 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:30:17.211 08:59:19 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:30:17.211 [2024-04-18 08:59:19.212334] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:30:17.211 [2024-04-18 08:59:19.212748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:30:17.469 [2024-04-18 08:59:19.402591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.732 [2024-04-18 08:59:19.729567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.038 08:59:24 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:30:23.038 08:59:24 -- common/autotest_common.sh@638 -- # local es=0 00:30:23.038 08:59:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:30:23.038 08:59:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:23.038 08:59:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:23.038 08:59:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:23.038 08:59:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:23.038 08:59:24 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:30:23.038 08:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.038 08:59:24 -- common/autotest_common.sh@10 -- # set +x 00:30:23.038 08:59:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:23.038 08:59:24 -- common/autotest_common.sh@641 -- # es=1 00:30:23.038 08:59:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:23.038 08:59:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:23.038 08:59:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:23.038 08:59:24 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:30:23.038 08:59:24 -- rpc/skip_rpc.sh@23 -- # killprocess 61744 00:30:23.038 08:59:24 -- common/autotest_common.sh@936 -- # '[' -z 61744 ']' 00:30:23.038 08:59:24 -- common/autotest_common.sh@940 -- # kill -0 61744 00:30:23.038 08:59:24 -- common/autotest_common.sh@941 -- # uname 00:30:23.038 08:59:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:23.038 08:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61744 00:30:23.038 08:59:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:30:23.038 08:59:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:23.038 08:59:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61744' 00:30:23.038 killing process with pid 61744 00:30:23.038 08:59:24 -- common/autotest_common.sh@955 -- # kill 61744 00:30:23.038 08:59:24 -- common/autotest_common.sh@960 -- # wait 61744 00:30:24.938 00:30:24.938 real 0m7.874s 00:30:24.938 user 0m7.337s 00:30:24.938 sys 0m0.420s 00:30:24.938 08:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:24.938 08:59:26 -- common/autotest_common.sh@10 -- # set +x 00:30:24.938 ************************************ 00:30:24.938 END TEST skip_rpc 00:30:24.938 ************************************ 00:30:24.938 08:59:27 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:30:24.938 08:59:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:24.938 08:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:24.938 08:59:27 -- common/autotest_common.sh@10 -- # set +x 00:30:25.197 ************************************ 00:30:25.197 START TEST skip_rpc_with_json 00:30:25.197 ************************************ 00:30:25.197 08:59:27 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:30:25.197 08:59:27 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:30:25.197 08:59:27 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61858 00:30:25.197 08:59:27 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:25.197 08:59:27 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:30:25.197 08:59:27 -- rpc/skip_rpc.sh@31 -- # waitforlisten 61858 00:30:25.197 08:59:27 -- common/autotest_common.sh@817 -- # '[' -z 61858 ']' 00:30:25.197 08:59:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.197 08:59:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:25.197 08:59:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.197 08:59:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:25.197 08:59:27 -- common/autotest_common.sh@10 -- # set +x 00:30:25.197 [2024-04-18 08:59:27.220313] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
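test_skip_rpc, which just passed, is the inverse check: the target runs with --no-rpc-server, so the NOT rpc_cmd spdk_get_version assertion above only succeeds because the RPC fails. A standalone sketch of the same check (run as root, as the harness does):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
pid=$!
sleep 5   # same settle time the test uses
if "$SPDK/scripts/rpc.py" spdk_get_version 2>/dev/null; then
    echo 'unexpected: RPC server answered' >&2
    kill "$pid"
    exit 1
fi
kill "$pid"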
00:30:25.197 [2024-04-18 08:59:27.220728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:30:25.456 [2024-04-18 08:59:27.404215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.714 [2024-04-18 08:59:27.673069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.090 08:59:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:27.090 08:59:28 -- common/autotest_common.sh@850 -- # return 0 00:30:27.090 08:59:28 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:30:27.090 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.090 08:59:28 -- common/autotest_common.sh@10 -- # set +x 00:30:27.090 [2024-04-18 08:59:28.776294] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:30:27.090 request: 00:30:27.090 { 00:30:27.090 "trtype": "tcp", 00:30:27.090 "method": "nvmf_get_transports", 00:30:27.090 "req_id": 1 00:30:27.090 } 00:30:27.090 Got JSON-RPC error response 00:30:27.090 response: 00:30:27.090 { 00:30:27.090 "code": -19, 00:30:27.090 "message": "No such device" 00:30:27.090 } 00:30:27.090 08:59:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:27.090 08:59:28 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:30:27.090 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.090 08:59:28 -- common/autotest_common.sh@10 -- # set +x 00:30:27.090 [2024-04-18 08:59:28.788346] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.090 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.090 08:59:28 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:30:27.090 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.090 08:59:28 -- common/autotest_common.sh@10 -- # set +x 00:30:27.090 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.090 08:59:28 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:30:27.090 { 00:30:27.090 "subsystems": [ 00:30:27.090 { 00:30:27.090 "subsystem": "keyring", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "iobuf", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "iobuf_set_options", 00:30:27.090 "params": { 00:30:27.090 "small_pool_count": 8192, 00:30:27.090 "large_pool_count": 1024, 00:30:27.090 "small_bufsize": 8192, 00:30:27.090 "large_bufsize": 135168 00:30:27.090 } 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "sock", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "sock_impl_set_options", 00:30:27.090 "params": { 00:30:27.090 "impl_name": "posix", 00:30:27.090 "recv_buf_size": 2097152, 00:30:27.090 "send_buf_size": 2097152, 00:30:27.090 "enable_recv_pipe": true, 00:30:27.090 "enable_quickack": false, 00:30:27.090 "enable_placement_id": 0, 00:30:27.090 "enable_zerocopy_send_server": true, 00:30:27.090 "enable_zerocopy_send_client": false, 00:30:27.090 "zerocopy_threshold": 0, 00:30:27.090 "tls_version": 0, 00:30:27.090 "enable_ktls": false 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "sock_impl_set_options", 00:30:27.090 "params": { 00:30:27.090 "impl_name": "ssl", 00:30:27.090 "recv_buf_size": 4096, 00:30:27.090 "send_buf_size": 4096, 00:30:27.090 "enable_recv_pipe": true, 00:30:27.090 
"enable_quickack": false, 00:30:27.090 "enable_placement_id": 0, 00:30:27.090 "enable_zerocopy_send_server": true, 00:30:27.090 "enable_zerocopy_send_client": false, 00:30:27.090 "zerocopy_threshold": 0, 00:30:27.090 "tls_version": 0, 00:30:27.090 "enable_ktls": false 00:30:27.090 } 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "vmd", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "accel", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "accel_set_options", 00:30:27.090 "params": { 00:30:27.090 "small_cache_size": 128, 00:30:27.090 "large_cache_size": 16, 00:30:27.090 "task_count": 2048, 00:30:27.090 "sequence_count": 2048, 00:30:27.090 "buf_count": 2048 00:30:27.090 } 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "bdev", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "bdev_set_options", 00:30:27.090 "params": { 00:30:27.090 "bdev_io_pool_size": 65535, 00:30:27.090 "bdev_io_cache_size": 256, 00:30:27.090 "bdev_auto_examine": true, 00:30:27.090 "iobuf_small_cache_size": 128, 00:30:27.090 "iobuf_large_cache_size": 16 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "bdev_raid_set_options", 00:30:27.090 "params": { 00:30:27.090 "process_window_size_kb": 1024 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "bdev_iscsi_set_options", 00:30:27.090 "params": { 00:30:27.090 "timeout_sec": 30 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "bdev_nvme_set_options", 00:30:27.090 "params": { 00:30:27.090 "action_on_timeout": "none", 00:30:27.090 "timeout_us": 0, 00:30:27.090 "timeout_admin_us": 0, 00:30:27.090 "keep_alive_timeout_ms": 10000, 00:30:27.090 "arbitration_burst": 0, 00:30:27.090 "low_priority_weight": 0, 00:30:27.090 "medium_priority_weight": 0, 00:30:27.090 "high_priority_weight": 0, 00:30:27.090 "nvme_adminq_poll_period_us": 10000, 00:30:27.090 "nvme_ioq_poll_period_us": 0, 00:30:27.090 "io_queue_requests": 0, 00:30:27.090 "delay_cmd_submit": true, 00:30:27.090 "transport_retry_count": 4, 00:30:27.090 "bdev_retry_count": 3, 00:30:27.090 "transport_ack_timeout": 0, 00:30:27.090 "ctrlr_loss_timeout_sec": 0, 00:30:27.090 "reconnect_delay_sec": 0, 00:30:27.090 "fast_io_fail_timeout_sec": 0, 00:30:27.090 "disable_auto_failback": false, 00:30:27.090 "generate_uuids": false, 00:30:27.090 "transport_tos": 0, 00:30:27.090 "nvme_error_stat": false, 00:30:27.090 "rdma_srq_size": 0, 00:30:27.090 "io_path_stat": false, 00:30:27.090 "allow_accel_sequence": false, 00:30:27.090 "rdma_max_cq_size": 0, 00:30:27.090 "rdma_cm_event_timeout_ms": 0, 00:30:27.090 "dhchap_digests": [ 00:30:27.090 "sha256", 00:30:27.090 "sha384", 00:30:27.090 "sha512" 00:30:27.090 ], 00:30:27.090 "dhchap_dhgroups": [ 00:30:27.090 "null", 00:30:27.090 "ffdhe2048", 00:30:27.090 "ffdhe3072", 00:30:27.090 "ffdhe4096", 00:30:27.090 "ffdhe6144", 00:30:27.090 "ffdhe8192" 00:30:27.090 ] 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "bdev_nvme_set_hotplug", 00:30:27.090 "params": { 00:30:27.090 "period_us": 100000, 00:30:27.090 "enable": false 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "bdev_wait_for_examine" 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "scsi", 00:30:27.090 "config": null 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "scheduler", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": 
"framework_set_scheduler", 00:30:27.090 "params": { 00:30:27.090 "name": "static" 00:30:27.090 } 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "vhost_scsi", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "vhost_blk", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "ublk", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "nbd", 00:30:27.090 "config": [] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "nvmf", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "nvmf_set_config", 00:30:27.090 "params": { 00:30:27.090 "discovery_filter": "match_any", 00:30:27.090 "admin_cmd_passthru": { 00:30:27.090 "identify_ctrlr": false 00:30:27.090 } 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "nvmf_set_max_subsystems", 00:30:27.090 "params": { 00:30:27.090 "max_subsystems": 1024 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "nvmf_set_crdt", 00:30:27.090 "params": { 00:30:27.090 "crdt1": 0, 00:30:27.090 "crdt2": 0, 00:30:27.090 "crdt3": 0 00:30:27.090 } 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "method": "nvmf_create_transport", 00:30:27.090 "params": { 00:30:27.090 "trtype": "TCP", 00:30:27.090 "max_queue_depth": 128, 00:30:27.090 "max_io_qpairs_per_ctrlr": 127, 00:30:27.090 "in_capsule_data_size": 4096, 00:30:27.090 "max_io_size": 131072, 00:30:27.090 "io_unit_size": 131072, 00:30:27.090 "max_aq_depth": 128, 00:30:27.090 "num_shared_buffers": 511, 00:30:27.090 "buf_cache_size": 4294967295, 00:30:27.090 "dif_insert_or_strip": false, 00:30:27.090 "zcopy": false, 00:30:27.090 "c2h_success": true, 00:30:27.090 "sock_priority": 0, 00:30:27.090 "abort_timeout_sec": 1, 00:30:27.090 "ack_timeout": 0 00:30:27.090 } 00:30:27.090 } 00:30:27.090 ] 00:30:27.090 }, 00:30:27.090 { 00:30:27.090 "subsystem": "iscsi", 00:30:27.090 "config": [ 00:30:27.090 { 00:30:27.090 "method": "iscsi_set_options", 00:30:27.090 "params": { 00:30:27.090 "node_base": "iqn.2016-06.io.spdk", 00:30:27.090 "max_sessions": 128, 00:30:27.090 "max_connections_per_session": 2, 00:30:27.090 "max_queue_depth": 64, 00:30:27.090 "default_time2wait": 2, 00:30:27.090 "default_time2retain": 20, 00:30:27.090 "first_burst_length": 8192, 00:30:27.090 "immediate_data": true, 00:30:27.090 "allow_duplicated_isid": false, 00:30:27.091 "error_recovery_level": 0, 00:30:27.091 "nop_timeout": 60, 00:30:27.091 "nop_in_interval": 30, 00:30:27.091 "disable_chap": false, 00:30:27.091 "require_chap": false, 00:30:27.091 "mutual_chap": false, 00:30:27.091 "chap_group": 0, 00:30:27.091 "max_large_datain_per_connection": 64, 00:30:27.091 "max_r2t_per_connection": 4, 00:30:27.091 "pdu_pool_size": 36864, 00:30:27.091 "immediate_data_pool_size": 16384, 00:30:27.091 "data_out_pool_size": 2048 00:30:27.091 } 00:30:27.091 } 00:30:27.091 ] 00:30:27.091 } 00:30:27.091 ] 00:30:27.091 } 00:30:27.091 08:59:28 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:30:27.091 08:59:28 -- rpc/skip_rpc.sh@40 -- # killprocess 61858 00:30:27.091 08:59:28 -- common/autotest_common.sh@936 -- # '[' -z 61858 ']' 00:30:27.091 08:59:28 -- common/autotest_common.sh@940 -- # kill -0 61858 00:30:27.091 08:59:28 -- common/autotest_common.sh@941 -- # uname 00:30:27.091 08:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:27.091 08:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61858 00:30:27.091 08:59:28 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:30:27.091 08:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:27.091 08:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61858' 00:30:27.091 killing process with pid 61858 00:30:27.091 08:59:28 -- common/autotest_common.sh@955 -- # kill 61858 00:30:27.091 08:59:29 -- common/autotest_common.sh@960 -- # wait 61858 00:30:30.373 08:59:31 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61919 00:30:30.373 08:59:31 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:30:30.373 08:59:31 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:30:35.638 08:59:36 -- rpc/skip_rpc.sh@50 -- # killprocess 61919 00:30:35.638 08:59:36 -- common/autotest_common.sh@936 -- # '[' -z 61919 ']' 00:30:35.638 08:59:36 -- common/autotest_common.sh@940 -- # kill -0 61919 00:30:35.638 08:59:36 -- common/autotest_common.sh@941 -- # uname 00:30:35.638 08:59:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:35.638 08:59:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61919 00:30:35.638 killing process with pid 61919 00:30:35.638 08:59:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:35.638 08:59:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:35.638 08:59:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61919' 00:30:35.638 08:59:36 -- common/autotest_common.sh@955 -- # kill 61919 00:30:35.638 08:59:36 -- common/autotest_common.sh@960 -- # wait 61919 00:30:38.167 08:59:39 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:30:38.167 08:59:39 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:30:38.167 00:30:38.167 real 0m12.562s 00:30:38.167 user 0m11.962s 00:30:38.167 sys 0m0.946s 00:30:38.167 08:59:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:38.167 ************************************ 00:30:38.167 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:30:38.167 END TEST skip_rpc_with_json 00:30:38.167 ************************************ 00:30:38.167 08:59:39 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:30:38.167 08:59:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.167 08:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.167 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:30:38.167 ************************************ 00:30:38.167 START TEST skip_rpc_with_delay 00:30:38.167 ************************************ 00:30:38.167 08:59:39 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:30:38.167 08:59:39 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:30:38.167 08:59:39 -- common/autotest_common.sh@638 -- # local es=0 00:30:38.167 08:59:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:30:38.167 08:59:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.167 08:59:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:38.167 08:59:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.167 08:59:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:38.167 
08:59:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.167 08:59:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:38.167 08:59:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.167 08:59:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:30:38.167 08:59:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:30:38.167 [2024-04-18 08:59:39.919648] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:30:38.167 [2024-04-18 08:59:39.920200] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:30:38.167 08:59:39 -- common/autotest_common.sh@641 -- # es=1 00:30:38.167 08:59:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:38.167 08:59:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:38.167 08:59:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:38.167 00:30:38.167 real 0m0.209s 00:30:38.167 user 0m0.124s 00:30:38.167 sys 0m0.078s 00:30:38.167 08:59:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:38.167 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:30:38.167 ************************************ 00:30:38.167 END TEST skip_rpc_with_delay 00:30:38.167 ************************************ 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@77 -- # uname 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:30:38.167 08:59:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.167 08:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.167 08:59:40 -- common/autotest_common.sh@10 -- # set +x 00:30:38.167 ************************************ 00:30:38.167 START TEST exit_on_failed_rpc_init 00:30:38.167 ************************************ 00:30:38.167 08:59:40 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62067 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:38.167 08:59:40 -- rpc/skip_rpc.sh@63 -- # waitforlisten 62067 00:30:38.167 08:59:40 -- common/autotest_common.sh@817 -- # '[' -z 62067 ']' 00:30:38.167 08:59:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.167 08:59:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:38.167 08:59:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.167 08:59:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:38.167 08:59:40 -- common/autotest_common.sh@10 -- # set +x 00:30:38.167 [2024-04-18 08:59:40.235005] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
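The valid_exec_arg/NOT plumbing traced above is a negative assertion: the test passes only if spdk_tgt refuses to start. A simplified sketch of the same idea (the real helper in autotest_common.sh also normalizes exit codes, as the es= lines show):

  NOT() {
      # invert the wrapped command's status: succeed only when it fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  # '--wait-for-rpc' is rejected when no RPC server will be started
  NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc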
00:30:38.168 [2024-04-18 08:59:40.235368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62067 ] 00:30:38.425 [2024-04-18 08:59:40.410731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.684 [2024-04-18 08:59:40.740008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.125 08:59:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:40.125 08:59:41 -- common/autotest_common.sh@850 -- # return 0 00:30:40.125 08:59:41 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:30:40.125 08:59:41 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:30:40.125 08:59:41 -- common/autotest_common.sh@638 -- # local es=0 00:30:40.125 08:59:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:30:40.125 08:59:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.125 08:59:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:40.125 08:59:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.125 08:59:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:40.125 08:59:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.125 08:59:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:40.125 08:59:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.125 08:59:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:30:40.126 08:59:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:30:40.126 [2024-04-18 08:59:41.960703] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:30:40.126 [2024-04-18 08:59:41.961113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:30:40.126 [2024-04-18 08:59:42.147920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.384 [2024-04-18 08:59:42.440100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.384 [2024-04-18 08:59:42.440592] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:40.384 [2024-04-18 08:59:42.440789] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:40.384 [2024-04-18 08:59:42.440931] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:40.952 08:59:43 -- common/autotest_common.sh@641 -- # es=234 00:30:40.952 08:59:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:40.952 08:59:43 -- common/autotest_common.sh@650 -- # es=106 00:30:40.952 08:59:43 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:40.952 08:59:43 -- common/autotest_common.sh@658 -- # es=1 00:30:40.952 08:59:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:40.952 08:59:43 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:40.952 08:59:43 -- rpc/skip_rpc.sh@70 -- # killprocess 62067 00:30:40.952 08:59:43 -- common/autotest_common.sh@936 -- # '[' -z 62067 ']' 00:30:40.952 08:59:43 -- common/autotest_common.sh@940 -- # kill -0 62067 00:30:40.952 08:59:43 -- common/autotest_common.sh@941 -- # uname 00:30:40.952 08:59:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:40.952 08:59:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62067 00:30:40.952 killing process with pid 62067 00:30:40.952 08:59:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:40.952 08:59:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:40.952 08:59:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62067' 00:30:40.952 08:59:43 -- common/autotest_common.sh@955 -- # kill 62067 00:30:40.952 08:59:43 -- common/autotest_common.sh@960 -- # wait 62067 00:30:44.250 00:30:44.250 real 0m5.670s 00:30:44.250 user 0m6.549s 00:30:44.250 sys 0m0.672s 00:30:44.250 ************************************ 00:30:44.250 END TEST exit_on_failed_rpc_init 00:30:44.250 ************************************ 00:30:44.250 08:59:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.250 08:59:45 -- common/autotest_common.sh@10 -- # set +x 00:30:44.250 08:59:45 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:30:44.250 00:30:44.250 real 0m26.960s 00:30:44.250 user 0m26.180s 00:30:44.250 sys 0m2.495s 00:30:44.250 08:59:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.250 08:59:45 -- common/autotest_common.sh@10 -- # set +x 00:30:44.250 ************************************ 00:30:44.250 END TEST skip_rpc 00:30:44.250 ************************************ 00:30:44.250 08:59:45 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:30:44.250 08:59:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.250 08:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.250 08:59:45 -- common/autotest_common.sh@10 -- # set +x 00:30:44.250 ************************************ 00:30:44.250 START TEST rpc_client 00:30:44.250 ************************************ 00:30:44.250 08:59:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:30:44.250 * Looking for test storage... 
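The two rpc.c ERROR lines above are the point of the test: the second spdk_tgt instance aborts because the first one already owns /var/tmp/spdk.sock, and the caller sees a non-zero exit. As a sketch (the -r flag for choosing another RPC socket is the same one the extra_key test uses later in this log):

  build/bin/spdk_tgt -m 0x1 &                        # owns /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2                          # fails: 'RPC Unix domain socket path ... in use'
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # fine: a separate RPC socket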
00:30:44.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:30:44.250 08:59:46 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:30:44.250 OK 00:30:44.250 08:59:46 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:30:44.250 ************************************ 00:30:44.250 END TEST rpc_client 00:30:44.250 ************************************ 00:30:44.250 00:30:44.250 real 0m0.161s 00:30:44.250 user 0m0.058s 00:30:44.250 sys 0m0.106s 00:30:44.250 08:59:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.250 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:30:44.250 08:59:46 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:30:44.250 08:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.250 08:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.250 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:30:44.250 ************************************ 00:30:44.250 START TEST json_config 00:30:44.250 ************************************ 00:30:44.250 08:59:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:30:44.508 08:59:46 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:44.508 08:59:46 -- nvmf/common.sh@7 -- # uname -s 00:30:44.508 08:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.508 08:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.508 08:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.508 08:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.508 08:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.508 08:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.508 08:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.508 08:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.508 08:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.508 08:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.508 08:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:30:44.508 08:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:30:44.508 08:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.508 08:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.508 08:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:44.508 08:59:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.508 08:59:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:44.508 08:59:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.509 08:59:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.509 08:59:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.509 08:59:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.509 08:59:46 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.509 08:59:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.509 08:59:46 -- paths/export.sh@5 -- # export PATH 00:30:44.509 08:59:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.509 08:59:46 -- nvmf/common.sh@47 -- # : 0 00:30:44.509 08:59:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.509 08:59:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.509 08:59:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.509 08:59:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.509 08:59:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.509 08:59:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.509 08:59:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.509 08:59:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.509 08:59:46 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:30:44.509 08:59:46 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:30:44.509 08:59:46 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:30:44.509 08:59:46 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:30:44.509 08:59:46 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:30:44.509 08:59:46 -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:30:44.509 WARNING: No tests are enabled so not running JSON configuration tests 00:30:44.509 08:59:46 -- json_config/json_config.sh@28 -- # exit 0 00:30:44.509 00:30:44.509 real 0m0.101s 00:30:44.509 user 0m0.044s 00:30:44.509 sys 0m0.052s 00:30:44.509 08:59:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.509 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:30:44.509 ************************************ 00:30:44.509 END TEST json_config 00:30:44.509 ************************************ 00:30:44.509 08:59:46 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:30:44.509 08:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:44.509 08:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:44.509 08:59:46 -- common/autotest_common.sh@10 -- # 
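json_config above (and json_config_extra_key just below) begin by sourcing test/nvmf/common.sh, which derives the host identity seen in the trace. One way to reproduce that derivation by hand (a sketch; the exact extraction common.sh performs is an assumption, and nvme-cli is required for gen-hostnqn):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")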
set +x 00:30:44.509 ************************************ 00:30:44.509 START TEST json_config_extra_key 00:30:44.509 ************************************ 00:30:44.509 08:59:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:30:44.509 08:59:46 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:44.509 08:59:46 -- nvmf/common.sh@7 -- # uname -s 00:30:44.509 08:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.509 08:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.509 08:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.509 08:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.509 08:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.509 08:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.509 08:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.509 08:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.509 08:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.509 08:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.509 08:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:30:44.768 08:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d44ceec-ac23-4e8e-aa1f-66e7850cb740 00:30:44.768 08:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.768 08:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.768 08:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:44.768 08:59:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.768 08:59:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:44.768 08:59:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.768 08:59:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.768 08:59:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.768 08:59:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.768 08:59:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.768 08:59:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.768 08:59:46 -- paths/export.sh@5 -- # export PATH 00:30:44.768 08:59:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.768 08:59:46 -- nvmf/common.sh@47 -- # : 0 00:30:44.768 08:59:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.768 08:59:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.768 08:59:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.768 08:59:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.768 08:59:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.768 08:59:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.768 08:59:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.768 08:59:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:30:44.768 INFO: launching applications... 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:30:44.768 08:59:46 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:30:44.768 08:59:46 -- json_config/common.sh@9 -- # local app=target 00:30:44.768 08:59:46 -- json_config/common.sh@10 -- # shift 00:30:44.768 08:59:46 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:30:44.768 08:59:46 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:30:44.768 08:59:46 -- json_config/common.sh@15 -- # local app_extra_params= 00:30:44.768 08:59:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:30:44.768 08:59:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:30:44.768 Waiting for target to run... 00:30:44.768 08:59:46 -- json_config/common.sh@22 -- # app_pid["$app"]=62305 00:30:44.768 08:59:46 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:30:44.768 08:59:46 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:30:44.768 08:59:46 -- json_config/common.sh@25 -- # waitforlisten 62305 /var/tmp/spdk_tgt.sock 00:30:44.768 08:59:46 -- common/autotest_common.sh@817 -- # '[' -z 62305 ']' 00:30:44.768 08:59:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:30:44.768 08:59:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:44.768 08:59:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:30:44.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:30:44.768 08:59:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:44.768 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:30:44.768 [2024-04-18 08:59:46.742016] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:30:44.768 [2024-04-18 08:59:46.742165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62305 ] 00:30:45.334 [2024-04-18 08:59:47.145485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.593 [2024-04-18 08:59:47.448814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.527 08:59:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:46.527 00:30:46.528 08:59:48 -- common/autotest_common.sh@850 -- # return 0 00:30:46.528 08:59:48 -- json_config/common.sh@26 -- # echo '' 00:30:46.528 INFO: shutting down applications... 00:30:46.528 08:59:48 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:30:46.528 08:59:48 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:30:46.528 08:59:48 -- json_config/common.sh@31 -- # local app=target 00:30:46.528 08:59:48 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:30:46.528 08:59:48 -- json_config/common.sh@35 -- # [[ -n 62305 ]] 00:30:46.528 08:59:48 -- json_config/common.sh@38 -- # kill -SIGINT 62305 00:30:46.528 08:59:48 -- json_config/common.sh@40 -- # (( i = 0 )) 00:30:46.528 08:59:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:46.528 08:59:48 -- json_config/common.sh@41 -- # kill -0 62305 00:30:46.528 08:59:48 -- json_config/common.sh@45 -- # sleep 0.5 00:30:47.095 08:59:48 -- json_config/common.sh@40 -- # (( i++ )) 00:30:47.095 08:59:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:47.095 08:59:48 -- json_config/common.sh@41 -- # kill -0 62305 00:30:47.095 08:59:48 -- json_config/common.sh@45 -- # sleep 0.5 00:30:47.354 08:59:49 -- json_config/common.sh@40 -- # (( i++ )) 00:30:47.354 08:59:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:47.354 08:59:49 -- json_config/common.sh@41 -- # kill -0 62305 00:30:47.354 08:59:49 -- json_config/common.sh@45 -- # sleep 0.5 00:30:47.920 08:59:49 -- json_config/common.sh@40 -- # (( i++ )) 00:30:47.920 08:59:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:47.920 08:59:49 -- json_config/common.sh@41 -- # kill -0 62305 00:30:47.920 08:59:49 -- json_config/common.sh@45 -- # sleep 0.5 00:30:48.487 08:59:50 -- json_config/common.sh@40 -- # (( i++ )) 00:30:48.487 08:59:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:48.487 08:59:50 -- json_config/common.sh@41 -- # kill -0 62305 00:30:48.487 08:59:50 -- json_config/common.sh@45 -- # sleep 0.5 00:30:49.054 08:59:50 -- json_config/common.sh@40 -- # (( i++ )) 
00:30:49.054 08:59:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:49.054 08:59:50 -- json_config/common.sh@41 -- # kill -0 62305 00:30:49.055 08:59:50 -- json_config/common.sh@45 -- # sleep 0.5 00:30:49.636 08:59:51 -- json_config/common.sh@40 -- # (( i++ )) 00:30:49.636 08:59:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:49.636 08:59:51 -- json_config/common.sh@41 -- # kill -0 62305 00:30:49.636 08:59:51 -- json_config/common.sh@45 -- # sleep 0.5 00:30:49.893 08:59:51 -- json_config/common.sh@40 -- # (( i++ )) 00:30:49.893 08:59:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:30:49.893 08:59:51 -- json_config/common.sh@41 -- # kill -0 62305 00:30:49.893 08:59:51 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:30:49.893 08:59:51 -- json_config/common.sh@43 -- # break 00:30:49.893 08:59:51 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:30:49.893 SPDK target shutdown done 00:30:49.893 08:59:51 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:30:49.893 Success 00:30:49.893 08:59:51 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:30:49.893 00:30:49.893 real 0m5.408s 00:30:49.893 user 0m4.865s 00:30:49.893 sys 0m0.589s 00:30:49.893 08:59:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.893 08:59:51 -- common/autotest_common.sh@10 -- # set +x 00:30:49.893 ************************************ 00:30:49.893 END TEST json_config_extra_key 00:30:49.893 ************************************ 00:30:49.893 08:59:51 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:30:49.893 08:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.893 08:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.893 08:59:51 -- common/autotest_common.sh@10 -- # set +x 00:30:50.151 ************************************ 00:30:50.151 START TEST alias_rpc 00:30:50.151 ************************************ 00:30:50.151 08:59:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:30:50.151 * Looking for test storage... 00:30:50.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:30:50.151 08:59:52 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:30:50.151 08:59:52 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62422 00:30:50.151 08:59:52 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62422 00:30:50.151 08:59:52 -- common/autotest_common.sh@817 -- # '[' -z 62422 ']' 00:30:50.151 08:59:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.151 08:59:52 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:50.151 08:59:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:50.151 08:59:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.151 08:59:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:50.151 08:59:52 -- common/autotest_common.sh@10 -- # set +x 00:30:50.408 [2024-04-18 08:59:52.297326] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
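The run of half-second sleeps above is json_config_test_shutdown_app polling for the target to exit after SIGINT. Condensed from the common.sh trace, the loop is essentially:

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes whether the pid is still alive
      sleep 0.5
  done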
00:30:50.409 [2024-04-18 08:59:52.297507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62422 ] 00:30:50.409 [2024-04-18 08:59:52.487496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.971 [2024-04-18 08:59:52.832830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.904 08:59:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:51.904 08:59:53 -- common/autotest_common.sh@850 -- # return 0 00:30:51.904 08:59:53 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:30:52.162 08:59:54 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62422 00:30:52.162 08:59:54 -- common/autotest_common.sh@936 -- # '[' -z 62422 ']' 00:30:52.162 08:59:54 -- common/autotest_common.sh@940 -- # kill -0 62422 00:30:52.162 08:59:54 -- common/autotest_common.sh@941 -- # uname 00:30:52.162 08:59:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:52.162 08:59:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62422 00:30:52.162 08:59:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:52.162 08:59:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:52.162 08:59:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62422' 00:30:52.162 killing process with pid 62422 00:30:52.162 08:59:54 -- common/autotest_common.sh@955 -- # kill 62422 00:30:52.162 08:59:54 -- common/autotest_common.sh@960 -- # wait 62422 00:30:55.445 00:30:55.445 real 0m5.007s 00:30:55.445 user 0m5.006s 00:30:55.445 sys 0m0.610s 00:30:55.445 ************************************ 00:30:55.445 END TEST alias_rpc 00:30:55.445 ************************************ 00:30:55.445 08:59:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:55.445 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:30:55.445 08:59:57 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:30:55.445 08:59:57 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:30:55.445 08:59:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:55.445 08:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:55.445 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:30:55.445 ************************************ 00:30:55.445 START TEST spdkcli_tcp 00:30:55.445 ************************************ 00:30:55.445 08:59:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:30:55.445 * Looking for test storage... 
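alias_rpc, which just completed above, exercises load_config reading a JSON configuration on stdin; the -i flag is copied verbatim from the trace (its exact semantics are not shown in this log). A sketch of the round trip against a running target:

  scripts/rpc.py save_config > cfg.json
  scripts/rpc.py load_config -i < cfg.json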
00:30:55.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:55.445 08:59:57 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:55.445 08:59:57 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:30:55.445 08:59:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:55.445 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62537 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@27 -- # waitforlisten 62537 00:30:55.445 08:59:57 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:30:55.445 08:59:57 -- common/autotest_common.sh@817 -- # '[' -z 62537 ']' 00:30:55.446 08:59:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.446 08:59:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:55.446 08:59:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.446 08:59:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:55.446 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:30:55.446 [2024-04-18 08:59:57.438284] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:30:55.446 [2024-04-18 08:59:57.438464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62537 ] 00:30:55.704 [2024-04-18 08:59:57.628322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:55.964 [2024-04-18 08:59:57.895336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.964 [2024-04-18 08:59:57.895365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.900 08:59:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:56.900 08:59:58 -- common/autotest_common.sh@850 -- # return 0 00:30:56.900 08:59:58 -- spdkcli/tcp.sh@31 -- # socat_pid=62565 00:30:56.900 08:59:58 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:30:56.900 08:59:58 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:30:57.158 [ 00:30:57.158 "bdev_malloc_delete", 00:30:57.158 "bdev_malloc_create", 00:30:57.158 "bdev_null_resize", 00:30:57.158 "bdev_null_delete", 00:30:57.158 "bdev_null_create", 00:30:57.158 "bdev_nvme_cuse_unregister", 00:30:57.158 "bdev_nvme_cuse_register", 00:30:57.158 "bdev_opal_new_user", 00:30:57.158 "bdev_opal_set_lock_state", 00:30:57.158 "bdev_opal_delete", 00:30:57.158 "bdev_opal_get_info", 00:30:57.158 "bdev_opal_create", 00:30:57.158 "bdev_nvme_opal_revert", 00:30:57.158 "bdev_nvme_opal_init", 00:30:57.158 "bdev_nvme_send_cmd", 00:30:57.158 "bdev_nvme_get_path_iostat", 00:30:57.158 "bdev_nvme_get_mdns_discovery_info", 00:30:57.158 "bdev_nvme_stop_mdns_discovery", 00:30:57.158 "bdev_nvme_start_mdns_discovery", 00:30:57.158 "bdev_nvme_set_multipath_policy", 00:30:57.158 "bdev_nvme_set_preferred_path", 00:30:57.158 "bdev_nvme_get_io_paths", 00:30:57.158 "bdev_nvme_remove_error_injection", 00:30:57.158 "bdev_nvme_add_error_injection", 00:30:57.158 "bdev_nvme_get_discovery_info", 00:30:57.158 "bdev_nvme_stop_discovery", 00:30:57.158 "bdev_nvme_start_discovery", 00:30:57.158 "bdev_nvme_get_controller_health_info", 00:30:57.158 "bdev_nvme_disable_controller", 00:30:57.158 "bdev_nvme_enable_controller", 00:30:57.158 "bdev_nvme_reset_controller", 00:30:57.158 "bdev_nvme_get_transport_statistics", 00:30:57.158 "bdev_nvme_apply_firmware", 00:30:57.158 "bdev_nvme_detach_controller", 00:30:57.158 "bdev_nvme_get_controllers", 00:30:57.158 "bdev_nvme_attach_controller", 00:30:57.158 "bdev_nvme_set_hotplug", 00:30:57.158 "bdev_nvme_set_options", 00:30:57.158 "bdev_passthru_delete", 00:30:57.158 "bdev_passthru_create", 00:30:57.158 "bdev_lvol_grow_lvstore", 00:30:57.158 "bdev_lvol_get_lvols", 00:30:57.158 "bdev_lvol_get_lvstores", 00:30:57.158 "bdev_lvol_delete", 00:30:57.158 "bdev_lvol_set_read_only", 00:30:57.158 "bdev_lvol_resize", 00:30:57.158 "bdev_lvol_decouple_parent", 00:30:57.158 "bdev_lvol_inflate", 00:30:57.158 "bdev_lvol_rename", 00:30:57.158 "bdev_lvol_clone_bdev", 00:30:57.158 "bdev_lvol_clone", 00:30:57.158 "bdev_lvol_snapshot", 00:30:57.158 "bdev_lvol_create", 00:30:57.158 "bdev_lvol_delete_lvstore", 00:30:57.158 "bdev_lvol_rename_lvstore", 00:30:57.158 "bdev_lvol_create_lvstore", 00:30:57.158 "bdev_raid_set_options", 00:30:57.158 "bdev_raid_remove_base_bdev", 00:30:57.158 "bdev_raid_add_base_bdev", 00:30:57.158 "bdev_raid_delete", 00:30:57.158 "bdev_raid_create", 00:30:57.158 "bdev_raid_get_bdevs", 00:30:57.158 "bdev_error_inject_error", 
00:30:57.158 "bdev_error_delete", 00:30:57.158 "bdev_error_create", 00:30:57.158 "bdev_split_delete", 00:30:57.158 "bdev_split_create", 00:30:57.158 "bdev_delay_delete", 00:30:57.158 "bdev_delay_create", 00:30:57.158 "bdev_delay_update_latency", 00:30:57.158 "bdev_zone_block_delete", 00:30:57.158 "bdev_zone_block_create", 00:30:57.158 "blobfs_create", 00:30:57.158 "blobfs_detect", 00:30:57.158 "blobfs_set_cache_size", 00:30:57.158 "bdev_xnvme_delete", 00:30:57.158 "bdev_xnvme_create", 00:30:57.158 "bdev_aio_delete", 00:30:57.158 "bdev_aio_rescan", 00:30:57.158 "bdev_aio_create", 00:30:57.158 "bdev_ftl_set_property", 00:30:57.158 "bdev_ftl_get_properties", 00:30:57.158 "bdev_ftl_get_stats", 00:30:57.158 "bdev_ftl_unmap", 00:30:57.158 "bdev_ftl_unload", 00:30:57.158 "bdev_ftl_delete", 00:30:57.159 "bdev_ftl_load", 00:30:57.159 "bdev_ftl_create", 00:30:57.159 "bdev_virtio_attach_controller", 00:30:57.159 "bdev_virtio_scsi_get_devices", 00:30:57.159 "bdev_virtio_detach_controller", 00:30:57.159 "bdev_virtio_blk_set_hotplug", 00:30:57.159 "bdev_iscsi_delete", 00:30:57.159 "bdev_iscsi_create", 00:30:57.159 "bdev_iscsi_set_options", 00:30:57.159 "accel_error_inject_error", 00:30:57.159 "ioat_scan_accel_module", 00:30:57.159 "dsa_scan_accel_module", 00:30:57.159 "iaa_scan_accel_module", 00:30:57.159 "keyring_file_remove_key", 00:30:57.159 "keyring_file_add_key", 00:30:57.159 "iscsi_set_options", 00:30:57.159 "iscsi_get_auth_groups", 00:30:57.159 "iscsi_auth_group_remove_secret", 00:30:57.159 "iscsi_auth_group_add_secret", 00:30:57.159 "iscsi_delete_auth_group", 00:30:57.159 "iscsi_create_auth_group", 00:30:57.159 "iscsi_set_discovery_auth", 00:30:57.159 "iscsi_get_options", 00:30:57.159 "iscsi_target_node_request_logout", 00:30:57.159 "iscsi_target_node_set_redirect", 00:30:57.159 "iscsi_target_node_set_auth", 00:30:57.159 "iscsi_target_node_add_lun", 00:30:57.159 "iscsi_get_stats", 00:30:57.159 "iscsi_get_connections", 00:30:57.159 "iscsi_portal_group_set_auth", 00:30:57.159 "iscsi_start_portal_group", 00:30:57.159 "iscsi_delete_portal_group", 00:30:57.159 "iscsi_create_portal_group", 00:30:57.159 "iscsi_get_portal_groups", 00:30:57.159 "iscsi_delete_target_node", 00:30:57.159 "iscsi_target_node_remove_pg_ig_maps", 00:30:57.159 "iscsi_target_node_add_pg_ig_maps", 00:30:57.159 "iscsi_create_target_node", 00:30:57.159 "iscsi_get_target_nodes", 00:30:57.159 "iscsi_delete_initiator_group", 00:30:57.159 "iscsi_initiator_group_remove_initiators", 00:30:57.159 "iscsi_initiator_group_add_initiators", 00:30:57.159 "iscsi_create_initiator_group", 00:30:57.159 "iscsi_get_initiator_groups", 00:30:57.159 "nvmf_set_crdt", 00:30:57.159 "nvmf_set_config", 00:30:57.159 "nvmf_set_max_subsystems", 00:30:57.159 "nvmf_subsystem_get_listeners", 00:30:57.159 "nvmf_subsystem_get_qpairs", 00:30:57.159 "nvmf_subsystem_get_controllers", 00:30:57.159 "nvmf_get_stats", 00:30:57.159 "nvmf_get_transports", 00:30:57.159 "nvmf_create_transport", 00:30:57.159 "nvmf_get_targets", 00:30:57.159 "nvmf_delete_target", 00:30:57.159 "nvmf_create_target", 00:30:57.159 "nvmf_subsystem_allow_any_host", 00:30:57.159 "nvmf_subsystem_remove_host", 00:30:57.159 "nvmf_subsystem_add_host", 00:30:57.159 "nvmf_ns_remove_host", 00:30:57.159 "nvmf_ns_add_host", 00:30:57.159 "nvmf_subsystem_remove_ns", 00:30:57.159 "nvmf_subsystem_add_ns", 00:30:57.159 "nvmf_subsystem_listener_set_ana_state", 00:30:57.159 "nvmf_discovery_get_referrals", 00:30:57.159 "nvmf_discovery_remove_referral", 00:30:57.159 "nvmf_discovery_add_referral", 00:30:57.159 
"nvmf_subsystem_remove_listener", 00:30:57.159 "nvmf_subsystem_add_listener", 00:30:57.159 "nvmf_delete_subsystem", 00:30:57.159 "nvmf_create_subsystem", 00:30:57.159 "nvmf_get_subsystems", 00:30:57.159 "env_dpdk_get_mem_stats", 00:30:57.159 "nbd_get_disks", 00:30:57.159 "nbd_stop_disk", 00:30:57.159 "nbd_start_disk", 00:30:57.159 "ublk_recover_disk", 00:30:57.159 "ublk_get_disks", 00:30:57.159 "ublk_stop_disk", 00:30:57.159 "ublk_start_disk", 00:30:57.159 "ublk_destroy_target", 00:30:57.159 "ublk_create_target", 00:30:57.159 "virtio_blk_create_transport", 00:30:57.159 "virtio_blk_get_transports", 00:30:57.159 "vhost_controller_set_coalescing", 00:30:57.159 "vhost_get_controllers", 00:30:57.159 "vhost_delete_controller", 00:30:57.159 "vhost_create_blk_controller", 00:30:57.159 "vhost_scsi_controller_remove_target", 00:30:57.159 "vhost_scsi_controller_add_target", 00:30:57.159 "vhost_start_scsi_controller", 00:30:57.159 "vhost_create_scsi_controller", 00:30:57.159 "thread_set_cpumask", 00:30:57.159 "framework_get_scheduler", 00:30:57.159 "framework_set_scheduler", 00:30:57.159 "framework_get_reactors", 00:30:57.159 "thread_get_io_channels", 00:30:57.159 "thread_get_pollers", 00:30:57.159 "thread_get_stats", 00:30:57.159 "framework_monitor_context_switch", 00:30:57.159 "spdk_kill_instance", 00:30:57.159 "log_enable_timestamps", 00:30:57.159 "log_get_flags", 00:30:57.159 "log_clear_flag", 00:30:57.159 "log_set_flag", 00:30:57.159 "log_get_level", 00:30:57.159 "log_set_level", 00:30:57.159 "log_get_print_level", 00:30:57.159 "log_set_print_level", 00:30:57.159 "framework_enable_cpumask_locks", 00:30:57.159 "framework_disable_cpumask_locks", 00:30:57.159 "framework_wait_init", 00:30:57.159 "framework_start_init", 00:30:57.159 "scsi_get_devices", 00:30:57.159 "bdev_get_histogram", 00:30:57.159 "bdev_enable_histogram", 00:30:57.159 "bdev_set_qos_limit", 00:30:57.159 "bdev_set_qd_sampling_period", 00:30:57.159 "bdev_get_bdevs", 00:30:57.159 "bdev_reset_iostat", 00:30:57.159 "bdev_get_iostat", 00:30:57.159 "bdev_examine", 00:30:57.159 "bdev_wait_for_examine", 00:30:57.159 "bdev_set_options", 00:30:57.159 "notify_get_notifications", 00:30:57.159 "notify_get_types", 00:30:57.159 "accel_get_stats", 00:30:57.159 "accel_set_options", 00:30:57.159 "accel_set_driver", 00:30:57.159 "accel_crypto_key_destroy", 00:30:57.159 "accel_crypto_keys_get", 00:30:57.159 "accel_crypto_key_create", 00:30:57.159 "accel_assign_opc", 00:30:57.159 "accel_get_module_info", 00:30:57.159 "accel_get_opc_assignments", 00:30:57.159 "vmd_rescan", 00:30:57.159 "vmd_remove_device", 00:30:57.159 "vmd_enable", 00:30:57.159 "sock_set_default_impl", 00:30:57.159 "sock_impl_set_options", 00:30:57.159 "sock_impl_get_options", 00:30:57.159 "iobuf_get_stats", 00:30:57.159 "iobuf_set_options", 00:30:57.159 "framework_get_pci_devices", 00:30:57.159 "framework_get_config", 00:30:57.159 "framework_get_subsystems", 00:30:57.159 "trace_get_info", 00:30:57.159 "trace_get_tpoint_group_mask", 00:30:57.159 "trace_disable_tpoint_group", 00:30:57.159 "trace_enable_tpoint_group", 00:30:57.159 "trace_clear_tpoint_mask", 00:30:57.159 "trace_set_tpoint_mask", 00:30:57.159 "keyring_get_keys", 00:30:57.159 "spdk_get_version", 00:30:57.159 "rpc_get_methods" 00:30:57.159 ] 00:30:57.159 08:59:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:30:57.159 08:59:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:57.159 08:59:59 -- common/autotest_common.sh@10 -- # set +x 00:30:57.418 08:59:59 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:30:57.418 08:59:59 -- spdkcli/tcp.sh@38 -- # killprocess 62537 00:30:57.418 08:59:59 -- common/autotest_common.sh@936 -- # '[' -z 62537 ']' 00:30:57.418 08:59:59 -- common/autotest_common.sh@940 -- # kill -0 62537 00:30:57.418 08:59:59 -- common/autotest_common.sh@941 -- # uname 00:30:57.418 08:59:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:57.418 08:59:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62537 00:30:57.418 08:59:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:57.418 killing process with pid 62537 00:30:57.418 08:59:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:57.418 08:59:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62537' 00:30:57.418 08:59:59 -- common/autotest_common.sh@955 -- # kill 62537 00:30:57.418 08:59:59 -- common/autotest_common.sh@960 -- # wait 62537 00:31:00.712 00:31:00.712 real 0m4.903s 00:31:00.712 user 0m8.639s 00:31:00.712 sys 0m0.643s 00:31:00.712 09:00:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:00.712 09:00:02 -- common/autotest_common.sh@10 -- # set +x 00:31:00.712 ************************************ 00:31:00.712 END TEST spdkcli_tcp 00:31:00.712 ************************************ 00:31:00.712 09:00:02 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:00.712 09:00:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:00.712 09:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:00.712 09:00:02 -- common/autotest_common.sh@10 -- # set +x 00:31:00.712 ************************************ 00:31:00.712 START TEST dpdk_mem_utility 00:31:00.712 ************************************ 00:31:00.712 09:00:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:00.712 * Looking for test storage... 00:31:00.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:31:00.712 09:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:00.712 09:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62667 00:31:00.712 09:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62667 00:31:00.712 09:00:02 -- common/autotest_common.sh@817 -- # '[' -z 62667 ']' 00:31:00.712 09:00:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.712 09:00:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:00.712 09:00:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.712 09:00:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:00.712 09:00:02 -- common/autotest_common.sh@10 -- # set +x 00:31:00.712 09:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:00.712 [2024-04-18 09:00:02.475535] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
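The spdkcli_tcp run that finished above wraps the target's Unix-domain RPC socket in TCP via socat so rpc.py can connect over IP; the long rpc_get_methods listing is the reply it fetched that way. Condensed from the trace (values verbatim):

  build/bin/spdk_tgt -m 0x3 -p 0 &                          # RPC server on /var/tmp/spdk.sock
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP 9998 to the Unix socket
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods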
00:31:00.712 [2024-04-18 09:00:02.475704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:31:00.712 [2024-04-18 09:00:02.659108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.971 [2024-04-18 09:00:02.921880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.345 09:00:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:02.345 09:00:04 -- common/autotest_common.sh@850 -- # return 0 00:31:02.345 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:31:02.345 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:31:02.345 09:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.345 09:00:04 -- common/autotest_common.sh@10 -- # set +x 00:31:02.345 { 00:31:02.345 "filename": "/tmp/spdk_mem_dump.txt" 00:31:02.345 } 00:31:02.345 09:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.345 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:02.345 DPDK memory size 820.000000 MiB in 1 heap(s) 00:31:02.345 1 heaps totaling size 820.000000 MiB 00:31:02.345 size: 820.000000 MiB heap id: 0 00:31:02.345 end heaps---------- 00:31:02.345 8 mempools totaling size 598.116089 MiB 00:31:02.345 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:31:02.345 size: 158.602051 MiB name: PDU_data_out_Pool 00:31:02.346 size: 84.521057 MiB name: bdev_io_62667 00:31:02.346 size: 51.011292 MiB name: evtpool_62667 00:31:02.346 size: 50.003479 MiB name: msgpool_62667 00:31:02.346 size: 21.763794 MiB name: PDU_Pool 00:31:02.346 size: 19.513306 MiB name: SCSI_TASK_Pool 00:31:02.346 size: 0.026123 MiB name: Session_Pool 00:31:02.346 end mempools------- 00:31:02.346 6 memzones totaling size 4.142822 MiB 00:31:02.346 size: 1.000366 MiB name: RG_ring_0_62667 00:31:02.346 size: 1.000366 MiB name: RG_ring_1_62667 00:31:02.346 size: 1.000366 MiB name: RG_ring_4_62667 00:31:02.346 size: 1.000366 MiB name: RG_ring_5_62667 00:31:02.346 size: 0.125366 MiB name: RG_ring_2_62667 00:31:02.346 size: 0.015991 MiB name: RG_ring_3_62667 00:31:02.346 end memzones------- 00:31:02.346 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:31:02.346 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:31:02.346 list of free elements. 
size: 18.469727 MiB 00:31:02.346 element at address: 0x200000400000 with size: 1.999451 MiB 00:31:02.346 element at address: 0x200000800000 with size: 1.996887 MiB 00:31:02.346 element at address: 0x200007000000 with size: 1.995972 MiB 00:31:02.346 element at address: 0x20000b200000 with size: 1.995972 MiB 00:31:02.346 element at address: 0x200019100040 with size: 0.999939 MiB 00:31:02.346 element at address: 0x200019500040 with size: 0.999939 MiB 00:31:02.346 element at address: 0x200019600000 with size: 0.999329 MiB 00:31:02.346 element at address: 0x200003e00000 with size: 0.996094 MiB 00:31:02.346 element at address: 0x200032200000 with size: 0.994324 MiB 00:31:02.346 element at address: 0x200018e00000 with size: 0.959656 MiB 00:31:02.346 element at address: 0x200019900040 with size: 0.937256 MiB 00:31:02.346 element at address: 0x200000200000 with size: 0.834106 MiB 00:31:02.346 element at address: 0x20001b000000 with size: 0.561218 MiB 00:31:02.346 element at address: 0x200019200000 with size: 0.489197 MiB 00:31:02.346 element at address: 0x200019a00000 with size: 0.485413 MiB 00:31:02.346 element at address: 0x200013800000 with size: 0.469116 MiB 00:31:02.346 element at address: 0x200028400000 with size: 0.399719 MiB 00:31:02.346 element at address: 0x200003a00000 with size: 0.356140 MiB 00:31:02.346 list of standard malloc elements. size: 199.265869 MiB 00:31:02.346 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:31:02.346 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:31:02.346 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:31:02.346 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:31:02.346 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:31:02.346 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:31:02.346 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:31:02.346 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:31:02.346 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:31:02.346 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:31:02.346 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:31:02.346 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:31:02.346 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200003aff980 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200003affa80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200003eff000 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200013878180 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200013878280 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200013878380 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200013878480 with size: 0.000244 MiB 00:31:02.346 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:31:02.346 element at address: 0x200019abc680 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:31:02.346 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0950c0 with size: 0.000244 MiB 
00:31:02.347 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:31:02.347 element at address: 0x200028466540 with size: 0.000244 MiB 00:31:02.347 element at address: 0x200028466640 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d300 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d580 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d680 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d780 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d880 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846d980 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846da80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846db80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846de80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846df80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e080 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e180 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e280 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e380 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e480 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e580 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e680 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e780 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e880 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846e980 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f080 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f180 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f280 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f380 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f480 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f580 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f680 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f780 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f880 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846f980 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:31:02.347 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:31:02.347 list of memzone associated elements. 
size: 602.264404 MiB 00:31:02.347 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:31:02.347 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:31:02.347 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:31:02.347 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:31:02.347 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:31:02.347 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62667_0 00:31:02.347 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:31:02.347 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62667_0 00:31:02.347 element at address: 0x200003fff340 with size: 48.003113 MiB 00:31:02.347 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62667_0 00:31:02.347 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:31:02.347 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:31:02.347 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:31:02.347 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:31:02.347 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:31:02.347 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62667 00:31:02.347 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:31:02.347 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62667 00:31:02.347 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:31:02.347 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62667 00:31:02.347 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:31:02.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:31:02.347 element at address: 0x200019abc780 with size: 1.008179 MiB 00:31:02.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:31:02.347 element at address: 0x200018efde00 with size: 1.008179 MiB 00:31:02.347 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:31:02.347 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:31:02.347 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:31:02.347 element at address: 0x200003eff100 with size: 1.000549 MiB 00:31:02.347 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62667 00:31:02.347 element at address: 0x200003affb80 with size: 1.000549 MiB 00:31:02.347 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62667 00:31:02.347 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:31:02.347 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62667 00:31:02.347 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:31:02.347 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62667 00:31:02.347 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:31:02.347 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62667 00:31:02.347 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:31:02.347 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:31:02.347 element at address: 0x200013878680 with size: 0.500549 MiB 00:31:02.347 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:31:02.347 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:31:02.347 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:31:02.348 element at address: 0x200003adf740 with size: 0.125549 MiB 00:31:02.348 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62667 00:31:02.348 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:31:02.348 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:31:02.348 element at address: 0x200028466740 with size: 0.023804 MiB 00:31:02.348 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:31:02.348 element at address: 0x200003adb500 with size: 0.016174 MiB 00:31:02.348 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62667 00:31:02.348 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:31:02.348 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:31:02.348 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:31:02.348 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62667 00:31:02.348 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:31:02.348 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62667 00:31:02.348 element at address: 0x20002846d400 with size: 0.000366 MiB 00:31:02.348 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:31:02.348 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:31:02.348 09:00:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62667 00:31:02.348 09:00:04 -- common/autotest_common.sh@936 -- # '[' -z 62667 ']' 00:31:02.348 09:00:04 -- common/autotest_common.sh@940 -- # kill -0 62667 00:31:02.348 09:00:04 -- common/autotest_common.sh@941 -- # uname 00:31:02.348 09:00:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:02.348 09:00:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62667 00:31:02.348 09:00:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:02.348 09:00:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:02.348 09:00:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62667' 00:31:02.348 killing process with pid 62667 00:31:02.348 09:00:04 -- common/autotest_common.sh@955 -- # kill 62667 00:31:02.348 09:00:04 -- common/autotest_common.sh@960 -- # wait 62667 00:31:05.629 00:31:05.629 real 0m4.873s 00:31:05.629 user 0m4.931s 00:31:05.629 sys 0m0.585s 00:31:05.629 09:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:05.629 09:00:07 -- common/autotest_common.sh@10 -- # set +x 00:31:05.629 ************************************ 00:31:05.629 END TEST dpdk_mem_utility 00:31:05.629 ************************************ 00:31:05.629 09:00:07 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:05.629 09:00:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:05.629 09:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:05.629 09:00:07 -- common/autotest_common.sh@10 -- # set +x 00:31:05.629 ************************************ 00:31:05.629 START TEST event 00:31:05.629 ************************************ 00:31:05.629 09:00:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:05.629 * Looking for test storage... 
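The dpdk_mem_utility pass that just finished above reduces to two steps: ask the running target to dump its DPDK allocator state, then post-process the dump file. A minimal sketch of the same flow by hand from the SPDK repo root; the script names and dump filename are exactly those used in the run above, while the RPC socket path (rpc.py's default /var/tmp/spdk.sock) is an assumption here:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # RPC reply names the dump file: /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summary view: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-heap detail for heap id 0, as listed above
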
00:31:05.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:31:05.629 09:00:07 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:05.629 09:00:07 -- bdev/nbd_common.sh@6 -- # set -e 00:31:05.629 09:00:07 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:05.629 09:00:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:31:05.629 09:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:05.629 09:00:07 -- common/autotest_common.sh@10 -- # set +x 00:31:05.629 ************************************ 00:31:05.629 START TEST event_perf 00:31:05.629 ************************************ 00:31:05.629 09:00:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:05.629 Running I/O for 1 seconds...[2024-04-18 09:00:07.484246] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:31:05.629 [2024-04-18 09:00:07.484704] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:31:05.629 [2024-04-18 09:00:07.686189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:05.887 [2024-04-18 09:00:07.974955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.887 [2024-04-18 09:00:07.975034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.887 [2024-04-18 09:00:07.975132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.887 [2024-04-18 09:00:07.975327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.781 Running I/O for 1 seconds... 00:31:07.781 lcore 0: 174770 00:31:07.781 lcore 1: 174770 00:31:07.781 lcore 2: 174770 00:31:07.781 lcore 3: 174771 00:31:07.781 done. 00:31:07.781 00:31:07.781 real 0m2.078s 00:31:07.781 user 0m4.782s 00:31:07.781 sys 0m0.164s 00:31:07.781 09:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:07.781 ************************************ 00:31:07.781 END TEST event_perf 00:31:07.781 ************************************ 00:31:07.781 09:00:09 -- common/autotest_common.sh@10 -- # set +x 00:31:07.781 09:00:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:07.781 09:00:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:31:07.781 09:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:07.781 09:00:09 -- common/autotest_common.sh@10 -- # set +x 00:31:07.781 ************************************ 00:31:07.781 START TEST event_reactor 00:31:07.781 ************************************ 00:31:07.781 09:00:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:07.781 [2024-04-18 09:00:09.647233] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:31:07.781 [2024-04-18 09:00:09.647597] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62836 ] 00:31:07.781 [2024-04-18 09:00:09.814618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.344 [2024-04-18 09:00:10.150894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.737 test_start 00:31:09.737 oneshot 00:31:09.737 tick 100 00:31:09.737 tick 100 00:31:09.737 tick 250 00:31:09.737 tick 100 00:31:09.737 tick 100 00:31:09.737 tick 100 00:31:09.737 tick 250 00:31:09.737 tick 500 00:31:09.737 tick 100 00:31:09.737 tick 100 00:31:09.737 tick 250 00:31:09.737 tick 100 00:31:09.737 tick 100 00:31:09.737 test_end 00:31:09.737 00:31:09.737 real 0m2.096s 00:31:09.737 user 0m1.849s 00:31:09.737 sys 0m0.127s 00:31:09.737 09:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:09.737 09:00:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.737 ************************************ 00:31:09.737 END TEST event_reactor 00:31:09.737 ************************************ 00:31:09.737 09:00:11 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:09.737 09:00:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:31:09.737 09:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:09.737 09:00:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.737 ************************************ 00:31:09.737 START TEST event_reactor_perf 00:31:09.737 ************************************ 00:31:09.737 09:00:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:09.996 [2024-04-18 09:00:11.857243] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:31:09.996 [2024-04-18 09:00:11.857711] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62883 ] 00:31:09.996 [2024-04-18 09:00:12.043051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.254 [2024-04-18 09:00:12.339461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.150 test_start 00:31:12.150 test_end 00:31:12.150 Performance: 274504 events per second 00:31:12.150 00:31:12.150 real 0m2.050s 00:31:12.150 user 0m1.790s 00:31:12.150 sys 0m0.143s 00:31:12.150 ************************************ 00:31:12.150 END TEST event_reactor_perf 00:31:12.150 ************************************ 00:31:12.150 09:00:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:12.150 09:00:13 -- common/autotest_common.sh@10 -- # set +x 00:31:12.150 09:00:13 -- event/event.sh@49 -- # uname -s 00:31:12.150 09:00:13 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:31:12.150 09:00:13 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:12.150 09:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:12.150 09:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:12.150 09:00:13 -- common/autotest_common.sh@10 -- # set +x 00:31:12.150 ************************************ 00:31:12.150 START TEST event_scheduler 00:31:12.150 ************************************ 00:31:12.150 09:00:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:12.150 * Looking for test storage... 00:31:12.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:31:12.150 09:00:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:31:12.150 09:00:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62962 00:31:12.151 09:00:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:31:12.151 09:00:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 62962 00:31:12.151 09:00:14 -- common/autotest_common.sh@817 -- # '[' -z 62962 ']' 00:31:12.151 09:00:14 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:31:12.151 09:00:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.151 09:00:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:12.151 09:00:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.151 09:00:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:12.151 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:31:12.151 [2024-04-18 09:00:14.169261] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:31:12.151 [2024-04-18 09:00:14.169638] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62962 ] 00:31:12.408 [2024-04-18 09:00:14.342294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.666 [2024-04-18 09:00:14.624960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.666 [2024-04-18 09:00:14.625037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.666 [2024-04-18 09:00:14.625108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.666 [2024-04-18 09:00:14.625114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.233 09:00:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:13.233 09:00:15 -- common/autotest_common.sh@850 -- # return 0 00:31:13.233 09:00:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:31:13.233 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.234 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.234 POWER: Env isn't set yet! 00:31:13.234 POWER: Attempting to initialise ACPI cpufreq power management... 00:31:13.234 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:13.234 POWER: Cannot set governor of lcore 0 to userspace 00:31:13.234 POWER: Attempting to initialise PSTAT power management... 00:31:13.234 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:13.234 POWER: Cannot set governor of lcore 0 to performance 00:31:13.234 POWER: Attempting to initialise AMD PSTATE power management... 00:31:13.234 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:13.234 POWER: Cannot set governor of lcore 0 to userspace 00:31:13.234 POWER: Attempting to initialise CPPC power management... 00:31:13.234 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:13.234 POWER: Cannot set governor of lcore 0 to userspace 00:31:13.234 POWER: Attempting to initialise VM power management... 00:31:13.234 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:31:13.234 POWER: Unable to set Power Management Environment for lcore 0 00:31:13.234 [2024-04-18 09:00:15.077076] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:31:13.234 [2024-04-18 09:00:15.077223] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:31:13.234 [2024-04-18 09:00:15.077354] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:31:13.234 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.234 09:00:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:31:13.234 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.234 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.492 [2024-04-18 09:00:15.519307] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
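The POWER messages above are typical of a VM guest: no cpufreq scaling driver is exposed, so each governor probe (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, VM channel) fails and the dynamic scheduler comes up without a DPDK governor, which the test tolerates. The ordering the test relies on is that the scheduler is selected while the app is still paused by --wait-for-rpc; a minimal sketch of the same sequence by hand, assuming the default RPC socket:

    ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
    ./scripts/rpc.py framework_start_init              # subsystems come up
    ./scripts/rpc.py framework_get_scheduler           # confirm what is active

All three methods appear in the rpc_get_methods listing earlier in this log.
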
00:31:13.492 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.492 09:00:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:31:13.492 09:00:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:13.492 09:00:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:13.492 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.751 ************************************ 00:31:13.751 START TEST scheduler_create_thread 00:31:13.751 ************************************ 00:31:13.751 09:00:15 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:31:13.751 09:00:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:31:13.751 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.751 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.751 2 00:31:13.751 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.751 09:00:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:31:13.751 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.751 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.751 3 00:31:13.751 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.751 09:00:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:31:13.751 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 4 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 5 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 6 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 7 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 8 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 9 00:31:13.752 
09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 10 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 09:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.752 09:00:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:31:13.752 09:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.752 09:00:15 -- common/autotest_common.sh@10 -- # set +x 00:31:14.685 09:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.685 09:00:16 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:31:14.685 09:00:16 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:31:14.685 09:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.685 09:00:16 -- common/autotest_common.sh@10 -- # set +x 00:31:16.085 09:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.085 00:31:16.085 real 0m2.138s 00:31:16.085 user 0m0.011s 00:31:16.085 sys 0m0.009s 00:31:16.085 ************************************ 00:31:16.085 END TEST scheduler_create_thread 00:31:16.085 ************************************ 00:31:16.085 09:00:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:16.085 09:00:17 -- common/autotest_common.sh@10 -- # set +x 00:31:16.085 09:00:17 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:16.085 09:00:17 -- scheduler/scheduler.sh@46 -- # killprocess 62962 00:31:16.085 09:00:17 -- common/autotest_common.sh@936 -- # '[' -z 62962 ']' 00:31:16.085 09:00:17 -- common/autotest_common.sh@940 -- # kill -0 62962 00:31:16.085 09:00:17 -- common/autotest_common.sh@941 -- # uname 00:31:16.085 09:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:16.085 09:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62962 00:31:16.085 09:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:31:16.085 killing process with pid 62962 00:31:16.085 09:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:31:16.085 09:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62962' 00:31:16.085 09:00:17 -- common/autotest_common.sh@955 -- # kill 62962 00:31:16.085 09:00:17 -- common/autotest_common.sh@960 -- # wait 62962 00:31:16.344 [2024-04-18 09:00:18.227763] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
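The scheduler_create_thread calls above drive a test-only RPC surface: rpc.py loads extra method wrappers with --plugin, and the scheduler test app registers the matching handlers. A sketch of one such call; the plugin location (test/event/scheduler, where scheduler_plugin.py is assumed to live) and the flag meanings are inferred from the calls above (-m 0x1..0x8 pinned threads at -a 100, idle ones at -a 0), so treat both as assumptions rather than documented API:

    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
    # -n thread name, -m cpumask to pin to, -a active (busy) percentage
    ./scripts/rpc.py --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100
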
00:31:17.718 00:31:17.718 real 0m5.794s 00:31:17.718 user 0m9.759s 00:31:17.718 sys 0m0.505s 00:31:17.718 09:00:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:17.718 ************************************ 00:31:17.718 END TEST event_scheduler 00:31:17.718 ************************************ 00:31:17.718 09:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 09:00:19 -- event/event.sh@51 -- # modprobe -n nbd 00:31:17.976 09:00:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:31:17.976 09:00:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:17.976 09:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:17.976 09:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:17.976 ************************************ 00:31:17.976 START TEST app_repeat 00:31:17.976 ************************************ 00:31:17.976 09:00:19 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:31:17.976 09:00:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:17.976 09:00:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:17.976 09:00:19 -- event/event.sh@13 -- # local nbd_list 00:31:17.976 09:00:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:17.976 09:00:19 -- event/event.sh@14 -- # local bdev_list 00:31:17.976 09:00:19 -- event/event.sh@15 -- # local repeat_times=4 00:31:17.976 09:00:19 -- event/event.sh@17 -- # modprobe nbd 00:31:17.976 09:00:19 -- event/event.sh@19 -- # repeat_pid=63083 00:31:17.976 09:00:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:31:17.976 09:00:19 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:31:17.976 09:00:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63083' 00:31:17.976 Process app_repeat pid: 63083 00:31:17.976 09:00:19 -- event/event.sh@23 -- # for i in {0..2} 00:31:17.976 09:00:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:31:17.976 spdk_app_start Round 0 00:31:17.976 09:00:19 -- event/event.sh@25 -- # waitforlisten 63083 /var/tmp/spdk-nbd.sock 00:31:17.976 09:00:19 -- common/autotest_common.sh@817 -- # '[' -z 63083 ']' 00:31:17.976 09:00:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:17.976 09:00:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:17.976 09:00:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:17.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:17.976 09:00:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:17.976 09:00:19 -- common/autotest_common.sh@10 -- # set +x 00:31:17.976 [2024-04-18 09:00:19.992922] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:31:17.976 [2024-04-18 09:00:19.993428] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63083 ] 00:31:18.233 [2024-04-18 09:00:20.190010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.491 [2024-04-18 09:00:20.473756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.491 [2024-04-18 09:00:20.473771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.058 09:00:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:19.058 09:00:20 -- common/autotest_common.sh@850 -- # return 0 00:31:19.058 09:00:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:19.316 Malloc0 00:31:19.316 09:00:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:19.573 Malloc1 00:31:19.573 09:00:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:19.573 09:00:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@12 -- # local i 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:19.574 09:00:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:19.832 /dev/nbd0 00:31:19.832 09:00:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:19.832 09:00:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:19.832 09:00:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:19.832 09:00:21 -- common/autotest_common.sh@855 -- # local i 00:31:19.832 09:00:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:19.832 09:00:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:19.832 09:00:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:19.832 09:00:21 -- common/autotest_common.sh@859 -- # break 00:31:19.832 09:00:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:19.832 09:00:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:19.832 09:00:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:19.832 1+0 records in 00:31:19.832 1+0 records out 00:31:19.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533807 s, 7.7 MB/s 00:31:19.832 09:00:21 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.089 09:00:21 -- common/autotest_common.sh@872 -- # size=4096 00:31:20.089 09:00:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.089 09:00:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:20.089 09:00:21 -- common/autotest_common.sh@875 -- # return 0 00:31:20.089 09:00:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.089 09:00:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:20.089 09:00:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:20.089 /dev/nbd1 00:31:20.354 09:00:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:20.354 09:00:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:20.354 09:00:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:20.354 09:00:22 -- common/autotest_common.sh@855 -- # local i 00:31:20.354 09:00:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:20.354 09:00:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:20.355 09:00:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:20.355 09:00:22 -- common/autotest_common.sh@859 -- # break 00:31:20.355 09:00:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:20.355 09:00:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:20.355 09:00:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:20.355 1+0 records in 00:31:20.355 1+0 records out 00:31:20.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422335 s, 9.7 MB/s 00:31:20.355 09:00:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.355 09:00:22 -- common/autotest_common.sh@872 -- # size=4096 00:31:20.355 09:00:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:20.355 09:00:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:20.355 09:00:22 -- common/autotest_common.sh@875 -- # return 0 00:31:20.355 09:00:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.355 09:00:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:20.355 09:00:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:20.355 09:00:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.355 09:00:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:20.613 { 00:31:20.613 "nbd_device": "/dev/nbd0", 00:31:20.613 "bdev_name": "Malloc0" 00:31:20.613 }, 00:31:20.613 { 00:31:20.613 "nbd_device": "/dev/nbd1", 00:31:20.613 "bdev_name": "Malloc1" 00:31:20.613 } 00:31:20.613 ]' 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:20.613 { 00:31:20.613 "nbd_device": "/dev/nbd0", 00:31:20.613 "bdev_name": "Malloc0" 00:31:20.613 }, 00:31:20.613 { 00:31:20.613 "nbd_device": "/dev/nbd1", 00:31:20.613 "bdev_name": "Malloc1" 00:31:20.613 } 00:31:20.613 ]' 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:20.613 /dev/nbd1' 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:20.613 /dev/nbd1' 00:31:20.613 09:00:22 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@65 -- # count=2 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@95 -- # count=2 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.613 09:00:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:20.614 256+0 records in 00:31:20.614 256+0 records out 00:31:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718048 s, 146 MB/s 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:20.614 256+0 records in 00:31:20.614 256+0 records out 00:31:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287097 s, 36.5 MB/s 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:20.614 256+0 records in 00:31:20.614 256+0 records out 00:31:20.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293859 s, 35.7 MB/s 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@51 -- # local i 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.614 09:00:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@41 -- # break 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@45 -- # return 0 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.871 09:00:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@41 -- # break 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:21.128 09:00:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@65 -- # true 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@65 -- # count=0 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@104 -- # count=0 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:21.386 09:00:23 -- bdev/nbd_common.sh@109 -- # return 0 00:31:21.386 09:00:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:21.961 09:00:23 -- event/event.sh@35 -- # sleep 3 00:31:23.859 [2024-04-18 09:00:25.468935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.859 [2024-04-18 09:00:25.737050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.859 [2024-04-18 09:00:25.737053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.127 [2024-04-18 09:00:26.023276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:31:24.127 [2024-04-18 09:00:26.023402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
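For readers following the xtrace, the fragments at common/autotest_common.sh@854-@875 that repeat after every nbd_start_disk above are the waitfornbd helper. A minimal bash sketch reconstructed from the traced lines in this log; the retry sleep is an assumption (both loops succeed on their first pass in this run, so the delay never appears in the trace), and the failure fallthrough is likewise assumed:

    waitfornbd() {
        local nbd_name=$1
        local i
        # First wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1 # assumed retry delay; not visible in this trace
        done
        # Then prove the device actually serves I/O: read one 4 KiB block back.
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            if [ "$size" != "0" ]; then
                return 0
            fi
            sleep 0.1 # assumed retry delay; not visible in this trace
        done
        return 1 # assumed failure path; never taken in this run
    }
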
00:31:25.062 09:00:26 -- event/event.sh@23 -- # for i in {0..2} 00:31:25.062 09:00:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:31:25.062 spdk_app_start Round 1 00:31:25.062 09:00:26 -- event/event.sh@25 -- # waitforlisten 63083 /var/tmp/spdk-nbd.sock 00:31:25.062 09:00:26 -- common/autotest_common.sh@817 -- # '[' -z 63083 ']' 00:31:25.062 09:00:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:25.062 09:00:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:25.062 09:00:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:25.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:25.062 09:00:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:25.062 09:00:26 -- common/autotest_common.sh@10 -- # set +x 00:31:25.062 09:00:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:25.062 09:00:27 -- common/autotest_common.sh@850 -- # return 0 00:31:25.062 09:00:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:25.320 Malloc0 00:31:25.578 09:00:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:25.836 Malloc1 00:31:25.836 09:00:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@12 -- # local i 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:25.836 09:00:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:26.094 /dev/nbd0 00:31:26.094 09:00:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:26.094 09:00:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:26.094 09:00:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:26.094 09:00:28 -- common/autotest_common.sh@855 -- # local i 00:31:26.094 09:00:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:26.094 09:00:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:26.094 09:00:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:26.094 09:00:28 -- common/autotest_common.sh@859 -- # break 00:31:26.094 09:00:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:26.094 09:00:28 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:31:26.094 09:00:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:26.094 1+0 records in 00:31:26.094 1+0 records out 00:31:26.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396667 s, 10.3 MB/s 00:31:26.094 09:00:28 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.094 09:00:28 -- common/autotest_common.sh@872 -- # size=4096 00:31:26.094 09:00:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.094 09:00:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:26.094 09:00:28 -- common/autotest_common.sh@875 -- # return 0 00:31:26.094 09:00:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.094 09:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:26.094 09:00:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:26.352 /dev/nbd1 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:26.352 09:00:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:26.352 09:00:28 -- common/autotest_common.sh@855 -- # local i 00:31:26.352 09:00:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:26.352 09:00:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:26.352 09:00:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:26.352 09:00:28 -- common/autotest_common.sh@859 -- # break 00:31:26.352 09:00:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:26.352 09:00:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:26.352 09:00:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:26.352 1+0 records in 00:31:26.352 1+0 records out 00:31:26.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273182 s, 15.0 MB/s 00:31:26.352 09:00:28 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.352 09:00:28 -- common/autotest_common.sh@872 -- # size=4096 00:31:26.352 09:00:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:26.352 09:00:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:26.352 09:00:28 -- common/autotest_common.sh@875 -- # return 0 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:26.352 09:00:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:26.940 09:00:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:26.940 { 00:31:26.940 "nbd_device": "/dev/nbd0", 00:31:26.940 "bdev_name": "Malloc0" 00:31:26.940 }, 00:31:26.940 { 00:31:26.940 "nbd_device": "/dev/nbd1", 00:31:26.940 "bdev_name": "Malloc1" 00:31:26.940 } 00:31:26.940 ]' 00:31:26.940 09:00:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:26.940 { 00:31:26.940 "nbd_device": "/dev/nbd0", 00:31:26.940 "bdev_name": "Malloc0" 00:31:26.940 }, 00:31:26.940 { 00:31:26.940 "nbd_device": "/dev/nbd1", 00:31:26.941 "bdev_name": "Malloc1" 00:31:26.941 } 
00:31:26.941 ]' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:26.941 /dev/nbd1' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:26.941 /dev/nbd1' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@65 -- # count=2 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@95 -- # count=2 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:26.941 256+0 records in 00:31:26.941 256+0 records out 00:31:26.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680764 s, 154 MB/s 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:26.941 256+0 records in 00:31:26.941 256+0 records out 00:31:26.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328445 s, 31.9 MB/s 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:26.941 256+0 records in 00:31:26.941 256+0 records out 00:31:26.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297764 s, 35.2 MB/s 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
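The write and verify passes traced above are the nbd_dd_data_verify helper (bdev/nbd_common.sh@70-@85): seed 1 MiB of /dev/urandom into a scratch file, dd it through every exported nbd device with O_DIRECT, then on the verify pass cmp each device against the scratch file and delete it. A sketch reconstructed from the traced lines; the rootdir variable is a stand-in for the absolute /home/vagrant/spdk_repo/spdk prefix shown in the log:

    nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=$rootdir/test/event/nbdrandtest
        local i
        if [ "$operation" = "write" ]; then
            # Seed a 1 MiB random pattern and push it through each device.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = "verify" ]; then
            # Byte-compare what each device holds against the pattern file.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }
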
00:31:26.941 09:00:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@51 -- # local i 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:26.941 09:00:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@41 -- # break 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.199 09:00:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@41 -- # break 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:27.457 09:00:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@65 -- # true 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@65 -- # count=0 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@104 -- # count=0 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:27.715 09:00:29 -- bdev/nbd_common.sh@109 -- # return 0 00:31:27.715 09:00:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:28.281 09:00:30 -- event/event.sh@35 -- # sleep 3 00:31:30.264 [2024-04-18 09:00:31.893452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:30.264 [2024-04-18 09:00:32.165567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.264 [2024-04-18 09:00:32.165597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.523 [2024-04-18 09:00:32.446470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
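The nbd_get_count calls that bracket every round (bdev/nbd_common.sh@61-@66) reduce to: ask the target for its exported disks over the RPC socket, pull the device paths out with jq, and count them. Reconstructed from the traced lines; the bare true visible in the trace right after the disks are stopped exists because grep -c exits non-zero when the count is zero, which would presumably abort a set -e run:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        # Ask the target which nbd devices it currently exports.
        nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 on zero matches, hence the || true guard seen in the trace.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count" # the caller captures this: count=2 while running, count=0 after teardown
    }
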
00:31:30.523 [2024-04-18 09:00:32.446550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:31.459 09:00:33 -- event/event.sh@23 -- # for i in {0..2} 00:31:31.459 09:00:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:31:31.459 spdk_app_start Round 2 00:31:31.459 09:00:33 -- event/event.sh@25 -- # waitforlisten 63083 /var/tmp/spdk-nbd.sock 00:31:31.459 09:00:33 -- common/autotest_common.sh@817 -- # '[' -z 63083 ']' 00:31:31.459 09:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:31.459 09:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:31.459 09:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:31.459 09:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:31.459 09:00:33 -- common/autotest_common.sh@10 -- # set +x 00:31:31.717 09:00:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:31.717 09:00:33 -- common/autotest_common.sh@850 -- # return 0 00:31:31.717 09:00:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:31.977 Malloc0 00:31:31.977 09:00:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:32.235 Malloc1 00:31:32.235 09:00:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@12 -- # local i 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:32.235 09:00:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:32.494 /dev/nbd0 00:31:32.494 09:00:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:32.494 09:00:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:32.494 09:00:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:32.494 09:00:34 -- common/autotest_common.sh@855 -- # local i 00:31:32.494 09:00:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:32.494 09:00:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:32.494 09:00:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:32.494 09:00:34 -- common/autotest_common.sh@859 
-- # break 00:31:32.494 09:00:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:32.494 09:00:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:32.494 09:00:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:32.752 1+0 records in 00:31:32.752 1+0 records out 00:31:32.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224716 s, 18.2 MB/s 00:31:32.752 09:00:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:32.752 09:00:34 -- common/autotest_common.sh@872 -- # size=4096 00:31:32.752 09:00:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:32.752 09:00:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:32.752 09:00:34 -- common/autotest_common.sh@875 -- # return 0 00:31:32.752 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:32.752 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:32.752 09:00:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:33.011 /dev/nbd1 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:33.011 09:00:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:33.011 09:00:34 -- common/autotest_common.sh@855 -- # local i 00:31:33.011 09:00:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:33.011 09:00:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:33.011 09:00:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:33.011 09:00:34 -- common/autotest_common.sh@859 -- # break 00:31:33.011 09:00:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:33.011 09:00:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:33.011 09:00:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:33.011 1+0 records in 00:31:33.011 1+0 records out 00:31:33.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381554 s, 10.7 MB/s 00:31:33.011 09:00:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:33.011 09:00:34 -- common/autotest_common.sh@872 -- # size=4096 00:31:33.011 09:00:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:33.011 09:00:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:33.011 09:00:34 -- common/autotest_common.sh@875 -- # return 0 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.011 09:00:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:33.280 { 00:31:33.280 "nbd_device": "/dev/nbd0", 00:31:33.280 "bdev_name": "Malloc0" 00:31:33.280 }, 00:31:33.280 { 00:31:33.280 "nbd_device": "/dev/nbd1", 00:31:33.280 "bdev_name": "Malloc1" 00:31:33.280 } 00:31:33.280 ]' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:31:33.280 { 00:31:33.280 "nbd_device": "/dev/nbd0", 00:31:33.280 "bdev_name": "Malloc0" 00:31:33.280 }, 00:31:33.280 { 00:31:33.280 "nbd_device": "/dev/nbd1", 00:31:33.280 "bdev_name": "Malloc1" 00:31:33.280 } 00:31:33.280 ]' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:33.280 /dev/nbd1' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:33.280 /dev/nbd1' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@65 -- # count=2 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@95 -- # count=2 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:33.280 256+0 records in 00:31:33.280 256+0 records out 00:31:33.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116302 s, 90.2 MB/s 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:33.280 256+0 records in 00:31:33.280 256+0 records out 00:31:33.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309859 s, 33.8 MB/s 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:33.280 256+0 records in 00:31:33.280 256+0 records out 00:31:33.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280738 s, 37.4 MB/s 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:33.280 09:00:35 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@51 -- # local i 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:33.280 09:00:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@41 -- # break 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:33.847 09:00:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@41 -- # break 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@45 -- # return 0 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:34.105 09:00:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:34.363 09:00:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@65 -- # true 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@65 -- # count=0 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@104 -- # count=0 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:34.364 09:00:36 -- bdev/nbd_common.sh@109 -- # return 0 00:31:34.364 09:00:36 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:34.932 09:00:36 -- event/event.sh@35 -- # sleep 3 00:31:36.306 [2024-04-18 09:00:38.339382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:36.590 [2024-04-18 09:00:38.598434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.590 [2024-04-18 09:00:38.598442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.856 [2024-04-18 09:00:38.859923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:31:36.856 [2024-04-18 09:00:38.860003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:37.793 09:00:39 -- event/event.sh@38 -- # waitforlisten 63083 /var/tmp/spdk-nbd.sock 00:31:37.793 09:00:39 -- common/autotest_common.sh@817 -- # '[' -z 63083 ']' 00:31:37.793 09:00:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:37.793 09:00:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:37.793 09:00:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:37.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:37.793 09:00:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:37.793 09:00:39 -- common/autotest_common.sh@10 -- # set +x 00:31:38.052 09:00:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:38.052 09:00:40 -- common/autotest_common.sh@850 -- # return 0 00:31:38.052 09:00:40 -- event/event.sh@39 -- # killprocess 63083 00:31:38.052 09:00:40 -- common/autotest_common.sh@936 -- # '[' -z 63083 ']' 00:31:38.052 09:00:40 -- common/autotest_common.sh@940 -- # kill -0 63083 00:31:38.052 09:00:40 -- common/autotest_common.sh@941 -- # uname 00:31:38.052 09:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:38.052 09:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63083 00:31:38.052 09:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:38.052 09:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:38.052 09:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63083' 00:31:38.052 killing process with pid 63083 00:31:38.052 09:00:40 -- common/autotest_common.sh@955 -- # kill 63083 00:31:38.052 09:00:40 -- common/autotest_common.sh@960 -- # wait 63083 00:31:39.430 spdk_app_start is called in Round 0. 00:31:39.430 Shutdown signal received, stop current app iteration 00:31:39.430 Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 reinitialization... 00:31:39.430 spdk_app_start is called in Round 1. 00:31:39.430 Shutdown signal received, stop current app iteration 00:31:39.430 Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 reinitialization... 00:31:39.430 spdk_app_start is called in Round 2. 00:31:39.430 Shutdown signal received, stop current app iteration 00:31:39.430 Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 reinitialization... 00:31:39.430 spdk_app_start is called in Round 3. 
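The app teardown above runs through killprocess (common/autotest_common.sh@936-@960): check the pid is alive, resolve its comm name (reactor_0 here) to special-case sudo wrappers, then kill and reap it. Reconstructed from the traced lines; the sudo branch body and the non-Linux path are not exercised in this run, so they are stubbed:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @936: refuse an empty pid
        kill -0 "$pid"                                      # @940: fails fast if already gone
        local process_name
        if [ "$(uname)" = "Linux" ]; then                   # @941
            process_name=$(ps --no-headers -o comm= "$pid") # @942: reactor_0 in this log
        fi
        if [ "$process_name" = "sudo" ]; then               # @946: branch not taken here
            : # body not visible in this trace
        fi
        echo "killing process with pid $pid"                # @954
        kill "$pid"                                         # @955
        wait "$pid"                                         # @960: reap so END TEST sees a clean exit
    }
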
00:31:39.430 Shutdown signal received, stop current app iteration 00:31:39.430 09:00:41 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:31:39.430 09:00:41 -- event/event.sh@42 -- # return 0 00:31:39.430 00:31:39.430 real 0m21.543s 00:31:39.430 user 0m44.798s 00:31:39.430 sys 0m3.479s 00:31:39.430 09:00:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:39.430 09:00:41 -- common/autotest_common.sh@10 -- # set +x 00:31:39.430 ************************************ 00:31:39.430 END TEST app_repeat 00:31:39.430 ************************************ 00:31:39.430 09:00:41 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:31:39.430 09:00:41 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:31:39.430 09:00:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:39.430 09:00:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:39.430 09:00:41 -- common/autotest_common.sh@10 -- # set +x 00:31:39.689 ************************************ 00:31:39.689 START TEST cpu_locks 00:31:39.689 ************************************ 00:31:39.689 09:00:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:31:39.689 * Looking for test storage... 00:31:39.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:31:39.689 09:00:41 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:31:39.689 09:00:41 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:31:39.689 09:00:41 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:31:39.689 09:00:41 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:31:39.689 09:00:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:39.689 09:00:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:39.689 09:00:41 -- common/autotest_common.sh@10 -- # set +x 00:31:39.689 ************************************ 00:31:39.689 START TEST default_locks 00:31:39.689 ************************************ 00:31:39.689 09:00:41 -- common/autotest_common.sh@1111 -- # default_locks 00:31:39.689 09:00:41 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:39.689 09:00:41 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63559 00:31:39.689 09:00:41 -- event/cpu_locks.sh@47 -- # waitforlisten 63559 00:31:39.689 09:00:41 -- common/autotest_common.sh@817 -- # '[' -z 63559 ']' 00:31:39.689 09:00:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.689 09:00:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:39.689 09:00:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.689 09:00:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:39.690 09:00:41 -- common/autotest_common.sh@10 -- # set +x 00:31:39.966 [2024-04-18 09:00:41.908171] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:31:39.966 [2024-04-18 09:00:41.908333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63559 ] 00:31:40.225 [2024-04-18 09:00:42.093049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.483 [2024-04-18 09:00:42.406421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.417 09:00:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:41.417 09:00:43 -- common/autotest_common.sh@850 -- # return 0 00:31:41.417 09:00:43 -- event/cpu_locks.sh@49 -- # locks_exist 63559 00:31:41.417 09:00:43 -- event/cpu_locks.sh@22 -- # lslocks -p 63559 00:31:41.417 09:00:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:41.985 09:00:43 -- event/cpu_locks.sh@50 -- # killprocess 63559 00:31:41.985 09:00:43 -- common/autotest_common.sh@936 -- # '[' -z 63559 ']' 00:31:41.985 09:00:43 -- common/autotest_common.sh@940 -- # kill -0 63559 00:31:41.985 09:00:43 -- common/autotest_common.sh@941 -- # uname 00:31:41.985 09:00:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:41.985 09:00:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63559 00:31:41.985 killing process with pid 63559 00:31:41.985 09:00:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:41.985 09:00:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:41.985 09:00:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63559' 00:31:41.985 09:00:43 -- common/autotest_common.sh@955 -- # kill 63559 00:31:41.985 09:00:43 -- common/autotest_common.sh@960 -- # wait 63559 00:31:44.519 09:00:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63559 00:31:44.519 09:00:46 -- common/autotest_common.sh@638 -- # local es=0 00:31:44.519 09:00:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63559 00:31:44.519 09:00:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:31:44.519 09:00:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:44.519 09:00:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:31:44.519 09:00:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:44.519 09:00:46 -- common/autotest_common.sh@641 -- # waitforlisten 63559 00:31:44.519 09:00:46 -- common/autotest_common.sh@817 -- # '[' -z 63559 ']' 00:31:44.519 09:00:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.519 09:00:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:44.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.519 09:00:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:44.519 09:00:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:44.519 09:00:46 -- common/autotest_common.sh@10 -- # set +x 00:31:44.519 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63559) - No such process 00:31:44.519 ERROR: process (pid: 63559) is no longer running 00:31:44.519 09:00:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:44.519 09:00:46 -- common/autotest_common.sh@850 -- # return 1 00:31:44.519 09:00:46 -- common/autotest_common.sh@641 -- # es=1 00:31:44.519 09:00:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:44.519 09:00:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:44.519 09:00:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:44.519 09:00:46 -- event/cpu_locks.sh@54 -- # no_locks 00:31:44.519 09:00:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:31:44.519 09:00:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:31:44.519 09:00:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:31:44.519 00:31:44.519 real 0m4.827s 00:31:44.519 user 0m4.850s 00:31:44.519 sys 0m0.750s 00:31:44.519 09:00:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:44.519 09:00:46 -- common/autotest_common.sh@10 -- # set +x 00:31:44.519 ************************************ 00:31:44.519 END TEST default_locks 00:31:44.519 ************************************ 00:31:44.778 09:00:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:31:44.778 09:00:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:44.779 09:00:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:44.779 09:00:46 -- common/autotest_common.sh@10 -- # set +x 00:31:44.779 ************************************ 00:31:44.779 START TEST default_locks_via_rpc 00:31:44.779 ************************************ 00:31:44.779 09:00:46 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:31:44.779 09:00:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63645 00:31:44.779 09:00:46 -- event/cpu_locks.sh@63 -- # waitforlisten 63645 00:31:44.779 09:00:46 -- common/autotest_common.sh@817 -- # '[' -z 63645 ']' 00:31:44.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.779 09:00:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.779 09:00:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:44.779 09:00:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.779 09:00:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:44.779 09:00:46 -- common/autotest_common.sh@10 -- # set +x 00:31:44.779 09:00:46 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:45.037 [2024-04-18 09:00:46.880249] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
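The negative test above wraps waitforlisten in the NOT helper: waitforlisten on the already-killed pid must fail (the kill: (63559) - No such process line followed by return 1 at @850), and NOT inverts that failure into a pass. A minimal reading of the traced lines; the real helper first checks via valid_exec_arg (@626-@640) that its argument is runnable, and also special-cases exit codes above 128 (signal deaths, the @649 check) and an optional expected EXIT_STATUS (@660), all elided here since none of those branches fire in this run:

    NOT() {
        local es=0
        "$@" || es=$?  # run the wrapped command, keep its exit status
        (( !es == 0 )) # @665: succeed only if the command failed
    }
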
00:31:45.037 [2024-04-18 09:00:46.880433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63645 ] 00:31:45.037 [2024-04-18 09:00:47.068232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.295 [2024-04-18 09:00:47.377462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.668 09:00:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:46.668 09:00:48 -- common/autotest_common.sh@850 -- # return 0 00:31:46.668 09:00:48 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:31:46.668 09:00:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.668 09:00:48 -- common/autotest_common.sh@10 -- # set +x 00:31:46.668 09:00:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.668 09:00:48 -- event/cpu_locks.sh@67 -- # no_locks 00:31:46.668 09:00:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:31:46.668 09:00:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:31:46.668 09:00:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:31:46.668 09:00:48 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:31:46.668 09:00:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.668 09:00:48 -- common/autotest_common.sh@10 -- # set +x 00:31:46.668 09:00:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.668 09:00:48 -- event/cpu_locks.sh@71 -- # locks_exist 63645 00:31:46.668 09:00:48 -- event/cpu_locks.sh@22 -- # lslocks -p 63645 00:31:46.668 09:00:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:46.926 09:00:48 -- event/cpu_locks.sh@73 -- # killprocess 63645 00:31:46.926 09:00:48 -- common/autotest_common.sh@936 -- # '[' -z 63645 ']' 00:31:46.926 09:00:48 -- common/autotest_common.sh@940 -- # kill -0 63645 00:31:46.926 09:00:48 -- common/autotest_common.sh@941 -- # uname 00:31:46.926 09:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:46.926 09:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63645 00:31:46.926 09:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:46.926 09:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:46.926 killing process with pid 63645 00:31:46.926 09:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63645' 00:31:46.926 09:00:48 -- common/autotest_common.sh@955 -- # kill 63645 00:31:46.926 09:00:48 -- common/autotest_common.sh@960 -- # wait 63645 00:31:50.209 00:31:50.209 real 0m4.858s 00:31:50.209 user 0m4.846s 00:31:50.209 sys 0m0.773s 00:31:50.209 09:00:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:50.209 ************************************ 00:31:50.209 END TEST default_locks_via_rpc 00:31:50.209 ************************************ 00:31:50.209 09:00:51 -- common/autotest_common.sh@10 -- # set +x 00:31:50.209 09:00:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:31:50.209 09:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:50.209 09:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:50.210 09:00:51 -- common/autotest_common.sh@10 -- # set +x 00:31:50.210 ************************************ 00:31:50.210 START TEST non_locking_app_on_locked_coremask 00:31:50.210 ************************************ 00:31:50.210 09:00:51 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:31:50.210 09:00:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63728 00:31:50.210 09:00:51 -- event/cpu_locks.sh@81 -- # waitforlisten 63728 /var/tmp/spdk.sock 00:31:50.210 09:00:51 -- common/autotest_common.sh@817 -- # '[' -z 63728 ']' 00:31:50.210 09:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.210 09:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:50.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.210 09:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.210 09:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:50.210 09:00:51 -- common/autotest_common.sh@10 -- # set +x 00:31:50.210 09:00:51 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:50.210 [2024-04-18 09:00:51.891436] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:31:50.210 [2024-04-18 09:00:51.891653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63728 ] 00:31:50.210 [2024-04-18 09:00:52.082061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.467 [2024-04-18 09:00:52.394236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.398 09:00:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:51.398 09:00:53 -- common/autotest_common.sh@850 -- # return 0 00:31:51.398 09:00:53 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:31:51.398 09:00:53 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63755 00:31:51.398 09:00:53 -- event/cpu_locks.sh@85 -- # waitforlisten 63755 /var/tmp/spdk2.sock 00:31:51.398 09:00:53 -- common/autotest_common.sh@817 -- # '[' -z 63755 ']' 00:31:51.398 09:00:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:51.398 09:00:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:51.398 09:00:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:51.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:51.398 09:00:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:51.398 09:00:53 -- common/autotest_common.sh@10 -- # set +x 00:31:51.656 [2024-04-18 09:00:53.603746] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:31:51.656 [2024-04-18 09:00:53.604116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63755 ] 00:31:51.914 [2024-04-18 09:00:53.781035] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:31:51.914 [2024-04-18 09:00:53.781145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.481 [2024-04-18 09:00:54.285306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.383 09:00:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:54.383 09:00:56 -- common/autotest_common.sh@850 -- # return 0 00:31:54.383 09:00:56 -- event/cpu_locks.sh@87 -- # locks_exist 63728 00:31:54.383 09:00:56 -- event/cpu_locks.sh@22 -- # lslocks -p 63728 00:31:54.383 09:00:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:55.315 09:00:57 -- event/cpu_locks.sh@89 -- # killprocess 63728 00:31:55.315 09:00:57 -- common/autotest_common.sh@936 -- # '[' -z 63728 ']' 00:31:55.315 09:00:57 -- common/autotest_common.sh@940 -- # kill -0 63728 00:31:55.315 09:00:57 -- common/autotest_common.sh@941 -- # uname 00:31:55.315 09:00:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:55.315 09:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63728 00:31:55.315 09:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:55.315 killing process with pid 63728 00:31:55.315 09:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:55.315 09:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63728' 00:31:55.315 09:00:57 -- common/autotest_common.sh@955 -- # kill 63728 00:31:55.315 09:00:57 -- common/autotest_common.sh@960 -- # wait 63728 00:32:00.626 09:01:02 -- event/cpu_locks.sh@90 -- # killprocess 63755 00:32:00.626 09:01:02 -- common/autotest_common.sh@936 -- # '[' -z 63755 ']' 00:32:00.626 09:01:02 -- common/autotest_common.sh@940 -- # kill -0 63755 00:32:00.626 09:01:02 -- common/autotest_common.sh@941 -- # uname 00:32:00.626 09:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:00.626 09:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63755 00:32:00.626 09:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:00.626 09:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:00.626 09:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63755' 00:32:00.626 killing process with pid 63755 00:32:00.626 09:01:02 -- common/autotest_common.sh@955 -- # kill 63755 00:32:00.626 09:01:02 -- common/autotest_common.sh@960 -- # wait 63755 00:32:03.925 00:32:03.925 real 0m13.575s 00:32:03.925 user 0m14.116s 00:32:03.925 sys 0m1.570s 00:32:03.925 09:01:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:03.925 ************************************ 00:32:03.925 END TEST non_locking_app_on_locked_coremask 00:32:03.925 ************************************ 00:32:03.925 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:32:03.925 09:01:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:32:03.925 09:01:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:03.925 09:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.925 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:32:03.925 ************************************ 00:32:03.925 START TEST locking_app_on_unlocked_coremask 00:32:03.925 ************************************ 00:32:03.925 09:01:05 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:32:03.925 09:01:05 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:32:03.925 09:01:05 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=63924 00:32:03.925 09:01:05 -- event/cpu_locks.sh@99 -- # waitforlisten 63924 /var/tmp/spdk.sock 00:32:03.925 09:01:05 -- common/autotest_common.sh@817 -- # '[' -z 63924 ']' 00:32:03.925 09:01:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.925 09:01:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:03.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.925 09:01:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.925 09:01:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:03.925 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:32:03.925 [2024-04-18 09:01:05.578709] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:03.925 [2024-04-18 09:01:05.578883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63924 ] 00:32:03.925 [2024-04-18 09:01:05.761786] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:32:03.925 [2024-04-18 09:01:05.761886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.925 [2024-04-18 09:01:06.008399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.303 09:01:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:05.303 09:01:07 -- common/autotest_common.sh@850 -- # return 0 00:32:05.303 09:01:07 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63940 00:32:05.303 09:01:07 -- event/cpu_locks.sh@103 -- # waitforlisten 63940 /var/tmp/spdk2.sock 00:32:05.303 09:01:07 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:32:05.303 09:01:07 -- common/autotest_common.sh@817 -- # '[' -z 63940 ']' 00:32:05.303 09:01:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:05.303 09:01:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:05.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:05.303 09:01:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:05.303 09:01:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:05.303 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:32:05.303 [2024-04-18 09:01:07.132492] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:05.303 [2024-04-18 09:01:07.132641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:32:05.303 [2024-04-18 09:01:07.293616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.868 [2024-04-18 09:01:07.791686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.767 09:01:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:07.768 09:01:09 -- common/autotest_common.sh@850 -- # return 0 00:32:07.768 09:01:09 -- event/cpu_locks.sh@105 -- # locks_exist 63940 00:32:08.025 09:01:09 -- event/cpu_locks.sh@22 -- # lslocks -p 63940 00:32:08.025 09:01:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:08.960 09:01:10 -- event/cpu_locks.sh@107 -- # killprocess 63924 00:32:08.960 09:01:10 -- common/autotest_common.sh@936 -- # '[' -z 63924 ']' 00:32:08.960 09:01:10 -- common/autotest_common.sh@940 -- # kill -0 63924 00:32:08.960 09:01:10 -- common/autotest_common.sh@941 -- # uname 00:32:08.960 09:01:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:08.960 09:01:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63924 00:32:08.960 09:01:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:08.960 killing process with pid 63924 00:32:08.961 09:01:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:08.961 09:01:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63924' 00:32:08.961 09:01:10 -- common/autotest_common.sh@955 -- # kill 63924 00:32:08.961 09:01:10 -- common/autotest_common.sh@960 -- # wait 63924 00:32:14.228 09:01:16 -- event/cpu_locks.sh@108 -- # killprocess 63940 00:32:14.228 09:01:16 -- common/autotest_common.sh@936 -- # '[' -z 63940 ']' 00:32:14.228 09:01:16 -- common/autotest_common.sh@940 -- # kill -0 63940 00:32:14.228 09:01:16 -- common/autotest_common.sh@941 -- # uname 00:32:14.228 09:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:14.228 09:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63940 00:32:14.228 09:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:14.228 killing process with pid 63940 00:32:14.228 09:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:14.228 09:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63940' 00:32:14.228 09:01:16 -- common/autotest_common.sh@955 -- # kill 63940 00:32:14.228 09:01:16 -- common/autotest_common.sh@960 -- # wait 63940 00:32:17.512 00:32:17.512 real 0m13.574s 00:32:17.512 user 0m14.042s 00:32:17.512 sys 0m1.532s 00:32:17.512 09:01:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:17.512 ************************************ 00:32:17.512 END TEST locking_app_on_unlocked_coremask 00:32:17.512 ************************************ 00:32:17.512 09:01:19 -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 09:01:19 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:32:17.512 09:01:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:17.512 09:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:17.512 09:01:19 -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 ************************************ 00:32:17.512 START TEST locking_app_on_locked_coremask 00:32:17.512 
************************************ 00:32:17.512 09:01:19 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:32:17.512 09:01:19 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:17.512 09:01:19 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64114 00:32:17.512 09:01:19 -- event/cpu_locks.sh@116 -- # waitforlisten 64114 /var/tmp/spdk.sock 00:32:17.512 09:01:19 -- common/autotest_common.sh@817 -- # '[' -z 64114 ']' 00:32:17.512 09:01:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.512 09:01:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:17.512 09:01:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.512 09:01:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:17.512 09:01:19 -- common/autotest_common.sh@10 -- # set +x 00:32:17.512 [2024-04-18 09:01:19.268478] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:17.512 [2024-04-18 09:01:19.268888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64114 ] 00:32:17.512 [2024-04-18 09:01:19.449893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.770 [2024-04-18 09:01:19.754644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.705 09:01:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:18.705 09:01:20 -- common/autotest_common.sh@850 -- # return 0 00:32:18.705 09:01:20 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:32:18.705 09:01:20 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64136 00:32:18.705 09:01:20 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64136 /var/tmp/spdk2.sock 00:32:18.705 09:01:20 -- common/autotest_common.sh@638 -- # local es=0 00:32:18.705 09:01:20 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64136 /var/tmp/spdk2.sock 00:32:18.705 09:01:20 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:32:18.705 09:01:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:18.705 09:01:20 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:32:18.705 09:01:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:18.705 09:01:20 -- common/autotest_common.sh@641 -- # waitforlisten 64136 /var/tmp/spdk2.sock 00:32:18.705 09:01:20 -- common/autotest_common.sh@817 -- # '[' -z 64136 ']' 00:32:18.705 09:01:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:18.705 09:01:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:18.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:18.705 09:01:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:18.705 09:01:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:18.705 09:01:20 -- common/autotest_common.sh@10 -- # set +x 00:32:18.963 [2024-04-18 09:01:20.921392] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:18.963 [2024-04-18 09:01:20.922111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64136 ] 00:32:19.221 [2024-04-18 09:01:21.113281] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64114 has claimed it. 00:32:19.221 [2024-04-18 09:01:21.113386] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:32:19.484 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64136) - No such process 00:32:19.484 ERROR: process (pid: 64136) is no longer running 00:32:19.484 09:01:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:19.484 09:01:21 -- common/autotest_common.sh@850 -- # return 1 00:32:19.484 09:01:21 -- common/autotest_common.sh@641 -- # es=1 00:32:19.484 09:01:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:19.484 09:01:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:19.484 09:01:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:19.484 09:01:21 -- event/cpu_locks.sh@122 -- # locks_exist 64114 00:32:19.484 09:01:21 -- event/cpu_locks.sh@22 -- # lslocks -p 64114 00:32:19.484 09:01:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:20.050 09:01:21 -- event/cpu_locks.sh@124 -- # killprocess 64114 00:32:20.050 09:01:21 -- common/autotest_common.sh@936 -- # '[' -z 64114 ']' 00:32:20.050 09:01:21 -- common/autotest_common.sh@940 -- # kill -0 64114 00:32:20.051 09:01:21 -- common/autotest_common.sh@941 -- # uname 00:32:20.051 09:01:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:20.051 09:01:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64114 00:32:20.051 09:01:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:20.051 09:01:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:20.051 killing process with pid 64114 00:32:20.051 09:01:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64114' 00:32:20.051 09:01:21 -- common/autotest_common.sh@955 -- # kill 64114 00:32:20.051 09:01:21 -- common/autotest_common.sh@960 -- # wait 64114 00:32:22.607 00:32:22.607 real 0m5.501s 00:32:22.607 user 0m5.738s 00:32:22.607 sys 0m0.930s 00:32:22.607 09:01:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:22.607 ************************************ 00:32:22.607 END TEST locking_app_on_locked_coremask 00:32:22.607 09:01:24 -- common/autotest_common.sh@10 -- # set +x 00:32:22.607 ************************************ 00:32:22.867 09:01:24 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:32:22.867 09:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:22.867 09:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:22.867 09:01:24 -- common/autotest_common.sh@10 -- # set +x 00:32:22.867 ************************************ 00:32:22.867 START TEST locking_overlapped_coremask 00:32:22.867 ************************************ 00:32:22.867 09:01:24 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:32:22.867 09:01:24 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64215 00:32:22.867 09:01:24 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:32:22.867 09:01:24 -- event/cpu_locks.sh@133 -- # waitforlisten 64215 /var/tmp/spdk.sock 00:32:22.867 
09:01:24 -- common/autotest_common.sh@817 -- # '[' -z 64215 ']' 00:32:22.867 09:01:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.867 09:01:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:22.867 09:01:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.867 09:01:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:22.867 09:01:24 -- common/autotest_common.sh@10 -- # set +x 00:32:22.867 [2024-04-18 09:01:24.940231] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:22.867 [2024-04-18 09:01:24.940676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64215 ] 00:32:23.126 [2024-04-18 09:01:25.125733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:23.386 [2024-04-18 09:01:25.385979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.386 [2024-04-18 09:01:25.386059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.386 [2024-04-18 09:01:25.386087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.760 09:01:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:24.760 09:01:26 -- common/autotest_common.sh@850 -- # return 0 00:32:24.760 09:01:26 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64237 00:32:24.760 09:01:26 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64237 /var/tmp/spdk2.sock 00:32:24.760 09:01:26 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:32:24.760 09:01:26 -- common/autotest_common.sh@638 -- # local es=0 00:32:24.760 09:01:26 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64237 /var/tmp/spdk2.sock 00:32:24.760 09:01:26 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:32:24.760 09:01:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:24.760 09:01:26 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:32:24.760 09:01:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:24.760 09:01:26 -- common/autotest_common.sh@641 -- # waitforlisten 64237 /var/tmp/spdk2.sock 00:32:24.760 09:01:26 -- common/autotest_common.sh@817 -- # '[' -z 64237 ']' 00:32:24.760 09:01:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:24.760 09:01:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:24.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:24.760 09:01:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:24.760 09:01:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:24.760 09:01:26 -- common/autotest_common.sh@10 -- # set +x 00:32:24.760 [2024-04-18 09:01:26.536942] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:24.760 [2024-04-18 09:01:26.537083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64237 ] 00:32:24.760 [2024-04-18 09:01:26.716094] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64215 has claimed it. 00:32:24.760 [2024-04-18 09:01:26.716192] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:32:25.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64237) - No such process 00:32:25.328 ERROR: process (pid: 64237) is no longer running 00:32:25.328 09:01:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:25.328 09:01:27 -- common/autotest_common.sh@850 -- # return 1 00:32:25.328 09:01:27 -- common/autotest_common.sh@641 -- # es=1 00:32:25.328 09:01:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:25.328 09:01:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:25.328 09:01:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:25.328 09:01:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:32:25.328 09:01:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:32:25.328 09:01:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:32:25.328 09:01:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:32:25.328 09:01:27 -- event/cpu_locks.sh@141 -- # killprocess 64215 00:32:25.328 09:01:27 -- common/autotest_common.sh@936 -- # '[' -z 64215 ']' 00:32:25.328 09:01:27 -- common/autotest_common.sh@940 -- # kill -0 64215 00:32:25.328 09:01:27 -- common/autotest_common.sh@941 -- # uname 00:32:25.328 09:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:25.328 09:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64215 00:32:25.328 09:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:25.328 09:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:25.328 killing process with pid 64215 00:32:25.328 09:01:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64215' 00:32:25.328 09:01:27 -- common/autotest_common.sh@955 -- # kill 64215 00:32:25.328 09:01:27 -- common/autotest_common.sh@960 -- # wait 64215 00:32:27.913 00:32:27.913 real 0m5.209s 00:32:27.913 user 0m13.455s 00:32:27.913 sys 0m0.687s 00:32:27.913 09:01:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:27.913 ************************************ 00:32:27.913 END TEST locking_overlapped_coremask 00:32:27.913 ************************************ 00:32:27.913 09:01:30 -- common/autotest_common.sh@10 -- # set +x 00:32:28.171 09:01:30 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:32:28.171 09:01:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:28.171 09:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:28.171 09:01:30 -- common/autotest_common.sh@10 -- # set +x 00:32:28.171 ************************************ 00:32:28.171 START TEST locking_overlapped_coremask_via_rpc 00:32:28.171 
************************************ 00:32:28.171 09:01:30 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:32:28.171 09:01:30 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64312 00:32:28.171 09:01:30 -- event/cpu_locks.sh@149 -- # waitforlisten 64312 /var/tmp/spdk.sock 00:32:28.171 09:01:30 -- common/autotest_common.sh@817 -- # '[' -z 64312 ']' 00:32:28.171 09:01:30 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:32:28.171 09:01:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.171 09:01:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:28.171 09:01:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.171 09:01:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:28.171 09:01:30 -- common/autotest_common.sh@10 -- # set +x 00:32:28.429 [2024-04-18 09:01:30.317607] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:28.429 [2024-04-18 09:01:30.318031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64312 ] 00:32:28.429 [2024-04-18 09:01:30.502665] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:32:28.429 [2024-04-18 09:01:30.502752] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:28.687 [2024-04-18 09:01:30.778501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.687 [2024-04-18 09:01:30.778552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.687 [2024-04-18 09:01:30.778564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.065 09:01:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:30.065 09:01:31 -- common/autotest_common.sh@850 -- # return 0 00:32:30.065 09:01:31 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64341 00:32:30.065 09:01:31 -- event/cpu_locks.sh@153 -- # waitforlisten 64341 /var/tmp/spdk2.sock 00:32:30.065 09:01:31 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:32:30.065 09:01:31 -- common/autotest_common.sh@817 -- # '[' -z 64341 ']' 00:32:30.065 09:01:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:30.065 09:01:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:30.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:30.065 09:01:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:30.065 09:01:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:30.065 09:01:31 -- common/autotest_common.sh@10 -- # set +x 00:32:30.065 [2024-04-18 09:01:31.999623] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:30.065 [2024-04-18 09:01:31.999794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64341 ] 00:32:30.323 [2024-04-18 09:01:32.202431] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:32:30.323 [2024-04-18 09:01:32.202541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:30.891 [2024-04-18 09:01:32.727422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.891 [2024-04-18 09:01:32.727441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.891 [2024-04-18 09:01:32.727450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:32.793 09:01:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:32.793 09:01:34 -- common/autotest_common.sh@850 -- # return 0 00:32:32.793 09:01:34 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:32:32.793 09:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.793 09:01:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.793 09:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.793 09:01:34 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:32.793 09:01:34 -- common/autotest_common.sh@638 -- # local es=0 00:32:32.793 09:01:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:32.793 09:01:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:32.793 09:01:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:32.793 09:01:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:32.793 09:01:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:32.793 09:01:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:32.793 09:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.793 09:01:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.793 [2024-04-18 09:01:34.861626] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64312 has claimed it. 00:32:32.793 request: 00:32:32.793 { 00:32:32.793 "method": "framework_enable_cpumask_locks", 00:32:32.793 "req_id": 1 00:32:32.793 } 00:32:32.793 Got JSON-RPC error response 00:32:32.793 response: 00:32:32.793 { 00:32:32.793 "code": -32603, 00:32:32.793 "message": "Failed to claim CPU core: 2" 00:32:32.793 } 00:32:32.793 09:01:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:32.793 09:01:34 -- common/autotest_common.sh@641 -- # es=1 00:32:32.793 09:01:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:32.793 09:01:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:32.793 09:01:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:32.793 09:01:34 -- event/cpu_locks.sh@158 -- # waitforlisten 64312 /var/tmp/spdk.sock 00:32:32.793 09:01:34 -- common/autotest_common.sh@817 -- # '[' -z 64312 ']' 00:32:32.793 09:01:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.793 09:01:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:32.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
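[Annotation] The exchange above is the intended failure path for overlapping core masks: the first target (pid 64312, mask 0x7) claimed cores 0-2 when framework_enable_cpumask_locks succeeded, so the same RPC against the second target (pid 64341, mask 0x1c, overlapping on core 2) fails with the "Failed to claim CPU core: 2" JSON-RPC error shown above. Each claim is backed by one lock file per core under /var/tmp. A minimal sketch of the verification the suite's check_remaining_locks helper performs — the two arrays and the comparison are taken from the xtrace, the exit handling is simplified:

    # One lock file per claimed core; a 0x7 mask should hold exactly cores 000-002.
    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files currently present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # expected set for cores 0-2
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || exit 1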
00:32:32.793 09:01:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.793 09:01:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:32.793 09:01:34 -- common/autotest_common.sh@10 -- # set +x 00:32:33.051 09:01:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:33.051 09:01:35 -- common/autotest_common.sh@850 -- # return 0 00:32:33.051 09:01:35 -- event/cpu_locks.sh@159 -- # waitforlisten 64341 /var/tmp/spdk2.sock 00:32:33.051 09:01:35 -- common/autotest_common.sh@817 -- # '[' -z 64341 ']' 00:32:33.051 09:01:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:33.051 09:01:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:33.051 09:01:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:33.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:33.051 09:01:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:33.051 09:01:35 -- common/autotest_common.sh@10 -- # set +x 00:32:33.309 09:01:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:33.309 09:01:35 -- common/autotest_common.sh@850 -- # return 0 00:32:33.309 09:01:35 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:32:33.309 09:01:35 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:32:33.309 09:01:35 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:32:33.309 09:01:35 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:32:33.309 00:32:33.309 real 0m5.242s 00:32:33.309 user 0m1.607s 00:32:33.309 sys 0m0.288s 00:32:33.309 09:01:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:33.309 09:01:35 -- common/autotest_common.sh@10 -- # set +x 00:32:33.309 ************************************ 00:32:33.309 END TEST locking_overlapped_coremask_via_rpc 00:32:33.309 ************************************ 00:32:33.567 09:01:35 -- event/cpu_locks.sh@174 -- # cleanup 00:32:33.567 09:01:35 -- event/cpu_locks.sh@15 -- # [[ -z 64312 ]] 00:32:33.567 09:01:35 -- event/cpu_locks.sh@15 -- # killprocess 64312 00:32:33.567 09:01:35 -- common/autotest_common.sh@936 -- # '[' -z 64312 ']' 00:32:33.567 09:01:35 -- common/autotest_common.sh@940 -- # kill -0 64312 00:32:33.567 09:01:35 -- common/autotest_common.sh@941 -- # uname 00:32:33.567 09:01:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:33.567 09:01:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64312 00:32:33.567 killing process with pid 64312 00:32:33.567 09:01:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:33.567 09:01:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:33.567 09:01:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64312' 00:32:33.567 09:01:35 -- common/autotest_common.sh@955 -- # kill 64312 00:32:33.567 09:01:35 -- common/autotest_common.sh@960 -- # wait 64312 00:32:36.925 09:01:38 -- event/cpu_locks.sh@16 -- # [[ -z 64341 ]] 00:32:36.925 09:01:38 -- event/cpu_locks.sh@16 -- # killprocess 64341 00:32:36.925 09:01:38 -- common/autotest_common.sh@936 -- # '[' -z 64341 ']' 00:32:36.925 
09:01:38 -- common/autotest_common.sh@940 -- # kill -0 64341 00:32:36.925 09:01:38 -- common/autotest_common.sh@941 -- # uname 00:32:36.925 09:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:36.925 09:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64341 00:32:36.925 killing process with pid 64341 00:32:36.925 09:01:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:32:36.925 09:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:32:36.925 09:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64341' 00:32:36.925 09:01:38 -- common/autotest_common.sh@955 -- # kill 64341 00:32:36.925 09:01:38 -- common/autotest_common.sh@960 -- # wait 64341 00:32:39.458 09:01:41 -- event/cpu_locks.sh@18 -- # rm -f 00:32:39.458 09:01:41 -- event/cpu_locks.sh@1 -- # cleanup 00:32:39.458 09:01:41 -- event/cpu_locks.sh@15 -- # [[ -z 64312 ]] 00:32:39.458 09:01:41 -- event/cpu_locks.sh@15 -- # killprocess 64312 00:32:39.458 09:01:41 -- common/autotest_common.sh@936 -- # '[' -z 64312 ']' 00:32:39.458 09:01:41 -- common/autotest_common.sh@940 -- # kill -0 64312 00:32:39.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64312) - No such process 00:32:39.458 Process with pid 64312 is not found 00:32:39.458 09:01:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64312 is not found' 00:32:39.458 09:01:41 -- event/cpu_locks.sh@16 -- # [[ -z 64341 ]] 00:32:39.458 Process with pid 64341 is not found 00:32:39.458 09:01:41 -- event/cpu_locks.sh@16 -- # killprocess 64341 00:32:39.458 09:01:41 -- common/autotest_common.sh@936 -- # '[' -z 64341 ']' 00:32:39.458 09:01:41 -- common/autotest_common.sh@940 -- # kill -0 64341 00:32:39.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64341) - No such process 00:32:39.458 09:01:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64341 is not found' 00:32:39.458 09:01:41 -- event/cpu_locks.sh@18 -- # rm -f 00:32:39.458 ************************************ 00:32:39.458 END TEST cpu_locks 00:32:39.458 ************************************ 00:32:39.458 00:32:39.458 real 0m59.433s 00:32:39.458 user 1m38.772s 00:32:39.458 sys 0m8.084s 00:32:39.458 09:01:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:39.458 09:01:41 -- common/autotest_common.sh@10 -- # set +x 00:32:39.458 ************************************ 00:32:39.458 END TEST event 00:32:39.458 ************************************ 00:32:39.458 00:32:39.458 real 1m33.818s 00:32:39.458 user 2m42.028s 00:32:39.458 sys 0m12.953s 00:32:39.458 09:01:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:39.458 09:01:41 -- common/autotest_common.sh@10 -- # set +x 00:32:39.458 09:01:41 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:32:39.458 09:01:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:39.458 09:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.458 09:01:41 -- common/autotest_common.sh@10 -- # set +x 00:32:39.458 ************************************ 00:32:39.458 START TEST thread 00:32:39.458 ************************************ 00:32:39.458 09:01:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:32:39.458 * Looking for test storage... 
00:32:39.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:32:39.458 09:01:41 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:32:39.458 09:01:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:32:39.458 09:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.458 09:01:41 -- common/autotest_common.sh@10 -- # set +x 00:32:39.458 ************************************ 00:32:39.458 START TEST thread_poller_perf 00:32:39.458 ************************************ 00:32:39.458 09:01:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:32:39.458 [2024-04-18 09:01:41.426714] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:39.458 [2024-04-18 09:01:41.427729] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64550 ] 00:32:39.717 [2024-04-18 09:01:41.612553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.976 [2024-04-18 09:01:41.893931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.976 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:32:41.394 ====================================== 00:32:41.394 busy:2115212774 (cyc) 00:32:41.394 total_run_count: 344000 00:32:41.394 tsc_hz: 2100000000 (cyc) 00:32:41.394 ====================================== 00:32:41.394 poller_cost: 6148 (cyc), 2927 (nsec) 00:32:41.394 00:32:41.394 real 0m2.021s 00:32:41.394 user 0m1.754s 00:32:41.394 sys 0m0.152s 00:32:41.394 09:01:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:41.394 ************************************ 00:32:41.394 END TEST thread_poller_perf 00:32:41.394 ************************************ 00:32:41.394 09:01:43 -- common/autotest_common.sh@10 -- # set +x 00:32:41.394 09:01:43 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:41.394 09:01:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:32:41.394 09:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:41.394 09:01:43 -- common/autotest_common.sh@10 -- # set +x 00:32:41.653 ************************************ 00:32:41.653 START TEST thread_poller_perf 00:32:41.653 ************************************ 00:32:41.653 09:01:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:41.653 [2024-04-18 09:01:43.575493] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:41.653 [2024-04-18 09:01:43.575902] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64596 ] 00:32:41.911 [2024-04-18 09:01:43.765749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.169 [2024-04-18 09:01:44.110435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.169 Running 1000 pollers for 1 seconds with 0 microseconds period. 
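[Annotation] For reference, the poller_cost figure in each summary block is busy cycles divided by total_run_count, converted to nanoseconds through the reported tsc_hz; both runs in this section are consistent with that. Checking the 1-microsecond run above with shell arithmetic (numbers copied from its summary; integer division, as in the report):

    echo $(( 2115212774 / 344000 ))              # 6148 cycles per poller run
    echo $(( 6148 * 1000000000 / 2100000000 ))   # 2927 nsec at the 2.1 GHz tsc_hz

The 0-microseconds-period run announced just above repeats the measurement with no timer period, so the pollers fire continuously and the per-run cost in the summary that follows drops to a few hundred cycles.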
00:32:43.546 ====================================== 00:32:43.546 busy:2103489678 (cyc) 00:32:43.546 total_run_count: 4610000 00:32:43.546 tsc_hz: 2100000000 (cyc) 00:32:43.546 ====================================== 00:32:43.546 poller_cost: 456 (cyc), 217 (nsec) 00:32:43.546 ************************************ 00:32:43.546 END TEST thread_poller_perf 00:32:43.546 ************************************ 00:32:43.546 00:32:43.546 real 0m2.064s 00:32:43.546 user 0m1.794s 00:32:43.546 sys 0m0.154s 00:32:43.546 09:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:43.546 09:01:45 -- common/autotest_common.sh@10 -- # set +x 00:32:43.546 09:01:45 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:32:43.546 ************************************ 00:32:43.546 END TEST thread 00:32:43.546 ************************************ 00:32:43.546 00:32:43.546 real 0m4.438s 00:32:43.546 user 0m3.677s 00:32:43.546 sys 0m0.499s 00:32:43.546 09:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:43.546 09:01:45 -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 09:01:45 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:43.806 09:01:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:43.806 09:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:43.806 09:01:45 -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 ************************************ 00:32:43.806 START TEST accel 00:32:43.806 ************************************ 00:32:43.806 09:01:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:43.806 * Looking for test storage... 00:32:43.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:32:43.806 09:01:45 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:32:43.806 09:01:45 -- accel/accel.sh@82 -- # get_expected_opcs 00:32:43.806 09:01:45 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:43.806 09:01:45 -- accel/accel.sh@62 -- # spdk_tgt_pid=64683 00:32:43.806 09:01:45 -- accel/accel.sh@63 -- # waitforlisten 64683 00:32:43.806 09:01:45 -- common/autotest_common.sh@817 -- # '[' -z 64683 ']' 00:32:43.806 09:01:45 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:32:43.806 09:01:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.806 09:01:45 -- accel/accel.sh@61 -- # build_accel_config 00:32:43.806 09:01:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:43.806 09:01:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.806 09:01:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:43.806 09:01:45 -- common/autotest_common.sh@10 -- # set +x 00:32:43.806 09:01:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:43.806 09:01:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:43.806 09:01:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:43.806 09:01:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:43.806 09:01:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:43.806 09:01:45 -- accel/accel.sh@40 -- # local IFS=, 00:32:43.806 09:01:45 -- accel/accel.sh@41 -- # jq -r . 00:32:44.064 [2024-04-18 09:01:45.973061] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:44.064 [2024-04-18 09:01:45.973569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64683 ] 00:32:44.064 [2024-04-18 09:01:46.162498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.631 [2024-04-18 09:01:46.485204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.564 09:01:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:45.564 09:01:47 -- common/autotest_common.sh@850 -- # return 0 00:32:45.564 09:01:47 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:32:45.564 09:01:47 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:32:45.564 09:01:47 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:32:45.564 09:01:47 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:32:45.564 09:01:47 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:32:45.564 09:01:47 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:32:45.564 09:01:47 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:32:45.564 09:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.564 09:01:47 -- common/autotest_common.sh@10 -- # set +x 00:32:45.564 09:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 
09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.564 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.564 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.564 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.565 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.565 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.565 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.565 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.565 09:01:47 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # IFS== 00:32:45.565 09:01:47 -- accel/accel.sh@72 -- # read -r opc module 00:32:45.565 09:01:47 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:45.565 09:01:47 -- accel/accel.sh@75 -- # killprocess 64683 00:32:45.565 09:01:47 -- common/autotest_common.sh@936 -- # '[' -z 64683 ']' 00:32:45.565 09:01:47 -- common/autotest_common.sh@940 -- # kill -0 64683 00:32:45.565 09:01:47 -- common/autotest_common.sh@941 -- # uname 00:32:45.565 09:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:45.565 09:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64683 00:32:45.565 09:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:45.565 09:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:45.565 09:01:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64683' 00:32:45.565 killing process with pid 64683 00:32:45.565 09:01:47 -- common/autotest_common.sh@955 -- # kill 64683 00:32:45.565 09:01:47 -- common/autotest_common.sh@960 -- # wait 64683 00:32:48.885 09:01:50 -- accel/accel.sh@76 -- # trap - ERR 00:32:48.885 09:01:50 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:32:48.885 09:01:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:48.885 09:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:48.885 09:01:50 -- common/autotest_common.sh@10 -- # set +x 00:32:48.885 09:01:50 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:32:48.885 09:01:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:32:48.885 09:01:50 -- accel/accel.sh@12 -- # build_accel_config 00:32:48.885 09:01:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:48.885 09:01:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:48.885 09:01:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:48.885 09:01:50 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:48.885 09:01:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:48.885 09:01:50 -- accel/accel.sh@40 -- # local IFS=, 00:32:48.885 09:01:50 -- accel/accel.sh@41 -- # jq -r . 00:32:48.885 09:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:48.885 09:01:50 -- common/autotest_common.sh@10 -- # set +x 00:32:48.885 09:01:50 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:32:48.885 09:01:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:48.885 09:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:48.885 09:01:50 -- common/autotest_common.sh@10 -- # set +x 00:32:48.885 ************************************ 00:32:48.885 START TEST accel_missing_filename 00:32:48.885 ************************************ 00:32:48.885 09:01:50 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:32:48.885 09:01:50 -- common/autotest_common.sh@638 -- # local es=0 00:32:48.885 09:01:50 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:32:48.885 09:01:50 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:32:48.885 09:01:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:48.885 09:01:50 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:32:48.885 09:01:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:48.885 09:01:50 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:32:48.885 09:01:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:32:48.885 09:01:50 -- accel/accel.sh@12 -- # build_accel_config 00:32:48.885 09:01:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:48.886 09:01:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:48.886 09:01:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:48.886 09:01:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:48.886 09:01:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:48.886 09:01:50 -- accel/accel.sh@40 -- # local IFS=, 00:32:48.886 09:01:50 -- accel/accel.sh@41 -- # jq -r . 00:32:48.886 [2024-04-18 09:01:50.621778] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:48.886 [2024-04-18 09:01:50.622144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64778 ] 00:32:48.886 [2024-04-18 09:01:50.807008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.178 [2024-04-18 09:01:51.143583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.436 [2024-04-18 09:01:51.421506] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:50.004 [2024-04-18 09:01:52.033305] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:32:50.572 A filename is required. 
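[Annotation] This is the expected negative result: a compress workload has no default input, so accel_perf aborts before starting and the NOT wrapper records the failure as a pass. For contrast, the two invocations look like this (paths and flags as they appear in this log; -y requests result verification, which the next test shows compress also rejects):

    # Fails -- compress requires an input file:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # Runs -- compress the test input for one second:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib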
00:32:50.572 09:01:52 -- common/autotest_common.sh@641 -- # es=234 00:32:50.572 09:01:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:50.572 09:01:52 -- common/autotest_common.sh@650 -- # es=106 00:32:50.572 09:01:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:50.572 09:01:52 -- common/autotest_common.sh@658 -- # es=1 00:32:50.572 09:01:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:50.572 00:32:50.572 real 0m1.958s 00:32:50.572 user 0m1.692s 00:32:50.572 sys 0m0.196s 00:32:50.572 ************************************ 00:32:50.572 END TEST accel_missing_filename 00:32:50.572 ************************************ 00:32:50.572 09:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:50.572 09:01:52 -- common/autotest_common.sh@10 -- # set +x 00:32:50.572 09:01:52 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:50.572 09:01:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:32:50.572 09:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:50.572 09:01:52 -- common/autotest_common.sh@10 -- # set +x 00:32:50.572 ************************************ 00:32:50.572 START TEST accel_compress_verify 00:32:50.572 ************************************ 00:32:50.572 09:01:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:50.572 09:01:52 -- common/autotest_common.sh@638 -- # local es=0 00:32:50.572 09:01:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:50.572 09:01:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:32:50.572 09:01:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:50.572 09:01:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:32:50.572 09:01:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:50.572 09:01:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:50.572 09:01:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:50.572 09:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:32:50.572 09:01:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:50.572 09:01:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:50.572 09:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:50.572 09:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:50.572 09:01:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:50.572 09:01:52 -- accel/accel.sh@40 -- # local IFS=, 00:32:50.572 09:01:52 -- accel/accel.sh@41 -- # jq -r . 00:32:50.831 [2024-04-18 09:01:52.735486] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:50.831 [2024-04-18 09:01:52.735985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64824 ] 00:32:50.831 [2024-04-18 09:01:52.929218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.424 [2024-04-18 09:01:53.298928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.682 [2024-04-18 09:01:53.571367] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:52.250 [2024-04-18 09:01:54.172103] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:32:52.819 00:32:52.819 Compression does not support the verify option, aborting. 00:32:52.819 09:01:54 -- common/autotest_common.sh@641 -- # es=161 00:32:52.819 09:01:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:52.819 09:01:54 -- common/autotest_common.sh@650 -- # es=33 00:32:52.819 09:01:54 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:52.819 09:01:54 -- common/autotest_common.sh@658 -- # es=1 00:32:52.819 09:01:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:52.819 00:32:52.819 real 0m1.991s 00:32:52.819 user 0m1.685s 00:32:52.819 sys 0m0.227s 00:32:52.819 09:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:52.819 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:32:52.819 ************************************ 00:32:52.819 END TEST accel_compress_verify 00:32:52.819 ************************************ 00:32:52.819 09:01:54 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:32:52.819 09:01:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:52.819 09:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:52.819 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:32:52.819 ************************************ 00:32:52.819 START TEST accel_wrong_workload 00:32:52.819 ************************************ 00:32:52.819 09:01:54 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:32:52.819 09:01:54 -- common/autotest_common.sh@638 -- # local es=0 00:32:52.819 09:01:54 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:32:52.819 09:01:54 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:32:52.819 09:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:52.819 09:01:54 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:32:52.819 09:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:52.819 09:01:54 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:32:52.819 09:01:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:32:52.819 09:01:54 -- accel/accel.sh@12 -- # build_accel_config 00:32:52.819 09:01:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:52.819 09:01:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:52.819 09:01:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:52.819 09:01:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:52.819 09:01:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:52.819 09:01:54 -- accel/accel.sh@40 -- # local IFS=, 00:32:52.819 09:01:54 -- accel/accel.sh@41 -- # jq -r . 
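[Annotation] Note the plumbing in the build_accel_config xtrace just above, which every accel case here repeats: optional module RPC snippets are collected in the accel_json_cfg array (empty in this run, since none of the optional accel modules are enabled), joined with IFS=',' and pretty-printed through jq -r ., and accel_perf reads the result on a /dev/fd path via -c. A hedged reconstruction — the JSON skeleton below is an assumption; only the array, the IFS join, and the jq step are visible in the log:

    # Sketch of the config plumbing; the exact JSON wrapper is assumed.
    accel_json_cfg=()   # would hold per-module RPC snippets, none enabled here
    build_accel_config() {
        local IFS=,     # join snippets with commas inside the JSON array
        jq -r . <<< "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
    }
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(build_accel_config) -t 1 -w foobar   # the rejected run below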
00:32:52.819 Unsupported workload type: foobar 00:32:52.819 [2024-04-18 09:01:54.833943] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:32:52.819 accel_perf options: 00:32:52.819 [-h help message] 00:32:52.819 [-q queue depth per core] 00:32:52.819 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:52.819 [-T number of threads per core 00:32:52.819 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:52.819 [-t time in seconds] 00:32:52.819 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:52.819 [ dif_verify, , dif_generate, dif_generate_copy 00:32:52.819 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:52.819 [-l for compress/decompress workloads, name of uncompressed input file 00:32:52.819 [-S for crc32c workload, use this seed value (default 0) 00:32:52.819 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:52.819 [-f for fill workload, use this BYTE value (default 255) 00:32:52.819 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:52.819 [-y verify result if this switch is on] 00:32:52.819 [-a tasks to allocate per core (default: same value as -q)] 00:32:52.819 Can be used to spread operations across a wider range of memory. 00:32:52.819 09:01:54 -- common/autotest_common.sh@641 -- # es=1 00:32:52.819 09:01:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:52.819 09:01:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:52.819 09:01:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:52.819 00:32:52.819 real 0m0.084s 00:32:52.819 user 0m0.077s 00:32:52.819 sys 0m0.044s 00:32:52.819 09:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:52.819 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:32:52.819 ************************************ 00:32:52.819 END TEST accel_wrong_workload 00:32:52.819 ************************************ 00:32:52.819 09:01:54 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:32:52.819 09:01:54 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:32:52.819 09:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:52.819 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:32:53.078 ************************************ 00:32:53.078 START TEST accel_negative_buffers 00:32:53.078 ************************************ 00:32:53.078 09:01:54 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:32:53.078 09:01:54 -- common/autotest_common.sh@638 -- # local es=0 00:32:53.078 09:01:54 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:32:53.078 09:01:54 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:32:53.078 09:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:53.078 09:01:54 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:32:53.078 09:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:53.078 09:01:54 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:32:53.078 09:01:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:32:53.078 09:01:54 -- accel/accel.sh@12 -- # 
build_accel_config 00:32:53.078 09:01:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:53.078 09:01:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:53.078 09:01:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:53.078 09:01:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:53.078 09:01:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:53.078 09:01:54 -- accel/accel.sh@40 -- # local IFS=, 00:32:53.078 09:01:54 -- accel/accel.sh@41 -- # jq -r . 00:32:53.078 -x option must be non-negative. 00:32:53.078 [2024-04-18 09:01:55.053664] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:32:53.078 accel_perf options: 00:32:53.078 [-h help message] 00:32:53.078 [-q queue depth per core] 00:32:53.078 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:53.078 [-T number of threads per core 00:32:53.078 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:53.078 [-t time in seconds] 00:32:53.078 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:53.078 [ dif_verify, , dif_generate, dif_generate_copy 00:32:53.078 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:53.078 [-l for compress/decompress workloads, name of uncompressed input file 00:32:53.078 [-S for crc32c workload, use this seed value (default 0) 00:32:53.078 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:53.078 [-f for fill workload, use this BYTE value (default 255) 00:32:53.078 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:53.078 [-y verify result if this switch is on] 00:32:53.078 [-a tasks to allocate per core (default: same value as -q)] 00:32:53.078 Can be used to spread operations across a wider range of memory. 
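[Annotation] Both option-validation cases above land in the same code path: spdk_app_parse_args rejects the bad value ('w' for foobar, 'x' for -1), the usage text is printed, and accel_perf exits non-zero; the NOT wrapper then inverts that status so the test passes only when the tool fails. A simplified stand-in for that wrapper, matching the es= arithmetic visible in the earlier xtrace (161 -> 33, 234 -> 106 after stripping the signal bit) but omitting its extra bookkeeping:

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es & ~128 ))  # deaths by signal still count as failure
        (( es != 0 ))
    }
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1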
00:32:53.078 09:01:55 -- common/autotest_common.sh@641 -- # es=1 00:32:53.078 09:01:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:53.078 09:01:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:53.078 09:01:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:53.078 00:32:53.078 real 0m0.107s 00:32:53.078 user 0m0.090s 00:32:53.078 sys 0m0.055s 00:32:53.078 09:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:53.078 09:01:55 -- common/autotest_common.sh@10 -- # set +x 00:32:53.078 ************************************ 00:32:53.078 END TEST accel_negative_buffers 00:32:53.078 ************************************ 00:32:53.078 09:01:55 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:32:53.078 09:01:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:32:53.078 09:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:53.078 09:01:55 -- common/autotest_common.sh@10 -- # set +x 00:32:53.336 ************************************ 00:32:53.336 START TEST accel_crc32c 00:32:53.336 ************************************ 00:32:53.336 09:01:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:32:53.336 09:01:55 -- accel/accel.sh@16 -- # local accel_opc 00:32:53.336 09:01:55 -- accel/accel.sh@17 -- # local accel_module 00:32:53.336 09:01:55 -- accel/accel.sh@19 -- # IFS=: 00:32:53.336 09:01:55 -- accel/accel.sh@19 -- # read -r var val 00:32:53.336 09:01:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:32:53.336 09:01:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:32:53.336 09:01:55 -- accel/accel.sh@12 -- # build_accel_config 00:32:53.336 09:01:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:53.336 09:01:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:53.336 09:01:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:53.336 09:01:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:53.336 09:01:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:53.336 09:01:55 -- accel/accel.sh@40 -- # local IFS=, 00:32:53.336 09:01:55 -- accel/accel.sh@41 -- # jq -r . 00:32:53.336 [2024-04-18 09:01:55.277027] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:32:53.336 [2024-04-18 09:01:55.277361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64918 ] 00:32:53.595 [2024-04-18 09:01:55.467165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.857 [2024-04-18 09:01:55.815527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=0x1 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=crc32c 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=32 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=software 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@22 -- # accel_module=software 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=32 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.117 09:01:56 -- accel/accel.sh@20 -- # val=32 00:32:54.117 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.117 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.118 09:01:56 -- accel/accel.sh@20 -- # val=1 00:32:54.118 09:01:56 
-- accel/accel.sh@21 -- # case "$var" in 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.118 09:01:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:32:54.118 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.118 09:01:56 -- accel/accel.sh@20 -- # val=Yes 00:32:54.118 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.118 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.118 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:54.118 09:01:56 -- accel/accel.sh@20 -- # val= 00:32:54.118 09:01:56 -- accel/accel.sh@21 -- # case "$var" in 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # IFS=: 00:32:54.118 09:01:56 -- accel/accel.sh@19 -- # read -r var val 00:32:56.650 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.650 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.650 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.650 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.650 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.650 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.650 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.650 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.650 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.650 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.650 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.651 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.651 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.651 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.651 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.651 09:01:58 -- accel/accel.sh@20 -- # val= 00:32:56.651 09:01:58 -- accel/accel.sh@21 -- # case "$var" in 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.651 09:01:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:56.651 09:01:58 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:32:56.651 09:01:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.651 00:32:56.651 real 0m3.015s 00:32:56.651 user 0m2.691s 00:32:56.651 sys 0m0.215s 00:32:56.651 09:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:56.651 09:01:58 -- common/autotest_common.sh@10 -- # set +x 00:32:56.651 ************************************ 00:32:56.651 END TEST accel_crc32c 00:32:56.651 ************************************ 00:32:56.651 09:01:58 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:32:56.651 09:01:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:32:56.651 09:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:56.651 09:01:58 -- common/autotest_common.sh@10 -- # set +x 00:32:56.651 ************************************ 00:32:56.651 START TEST accel_crc32c_C2 00:32:56.651 
************************************ 00:32:56.651 09:01:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:32:56.651 09:01:58 -- accel/accel.sh@16 -- # local accel_opc 00:32:56.651 09:01:58 -- accel/accel.sh@17 -- # local accel_module 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # IFS=: 00:32:56.651 09:01:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:32:56.651 09:01:58 -- accel/accel.sh@19 -- # read -r var val 00:32:56.651 09:01:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:32:56.651 09:01:58 -- accel/accel.sh@12 -- # build_accel_config 00:32:56.651 09:01:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:56.651 09:01:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:56.651 09:01:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:56.651 09:01:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:56.651 09:01:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:56.651 09:01:58 -- accel/accel.sh@40 -- # local IFS=, 00:32:56.651 09:01:58 -- accel/accel.sh@41 -- # jq -r . 00:32:56.651 [2024-04-18 09:01:58.407768] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:32:56.651 [2024-04-18 09:01:58.408162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:32:56.651 [2024-04-18 09:01:58.597938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.909 [2024-04-18 09:01:58.863092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=0x1 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=crc32c 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=0 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" 
in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=software 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@22 -- # accel_module=software 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=32 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=32 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val=1 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.167 09:01:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:32:57.167 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.167 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.168 09:01:59 -- accel/accel.sh@20 -- # val=Yes 00:32:57.168 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.168 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.168 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:57.168 09:01:59 -- accel/accel.sh@20 -- # val= 00:32:57.168 09:01:59 -- accel/accel.sh@21 -- # case "$var" in 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # IFS=: 00:32:57.168 09:01:59 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@20 -- # val= 
00:32:59.752 09:02:01 -- accel/accel.sh@21 -- # case "$var" in 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:59.752 09:02:01 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:32:59.752 09:02:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:59.752 00:32:59.752 real 0m2.965s 00:32:59.752 user 0m2.657s 00:32:59.752 sys 0m0.207s 00:32:59.752 09:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:59.752 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:32:59.752 ************************************ 00:32:59.752 END TEST accel_crc32c_C2 00:32:59.752 ************************************ 00:32:59.752 09:02:01 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:32:59.752 09:02:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:59.752 09:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:59.752 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:32:59.752 ************************************ 00:32:59.752 START TEST accel_copy 00:32:59.752 ************************************ 00:32:59.752 09:02:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:32:59.752 09:02:01 -- accel/accel.sh@16 -- # local accel_opc 00:32:59.752 09:02:01 -- accel/accel.sh@17 -- # local accel_module 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # IFS=: 00:32:59.752 09:02:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:32:59.752 09:02:01 -- accel/accel.sh@19 -- # read -r var val 00:32:59.752 09:02:01 -- accel/accel.sh@12 -- # build_accel_config 00:32:59.752 09:02:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:32:59.752 09:02:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:59.752 09:02:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:59.752 09:02:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:59.752 09:02:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:59.752 09:02:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:59.752 09:02:01 -- accel/accel.sh@40 -- # local IFS=, 00:32:59.752 09:02:01 -- accel/accel.sh@41 -- # jq -r . 00:32:59.752 [2024-04-18 09:02:01.496930] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
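The accel_copy test starting here passes '-t 1 -w copy -y'. Per the usage text earlier in this log, the '-o' transfer size defaults to 4 KiB, which matches the '4096 bytes' buffer value in the xtrace that follows. A sketch with that default spelled out:

  # sketch: copy workload with the default 4 KiB transfer size made explicit
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -o 4096 -y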
00:32:59.752 [2024-04-18 09:02:01.497252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65033 ] 00:32:59.752 [2024-04-18 09:02:01.665834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.010 [2024-04-18 09:02:01.932675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=0x1 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=copy 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@23 -- # accel_opc=copy 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=software 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@22 -- # accel_module=software 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=32 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=32 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=1 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:00.268 
09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val=Yes 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:00.268 09:02:02 -- accel/accel.sh@20 -- # val= 00:33:00.268 09:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # IFS=: 00:33:00.268 09:02:02 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@20 -- # val= 00:33:02.217 09:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.217 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.217 09:02:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:02.217 09:02:04 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:33:02.217 09:02:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:02.217 00:33:02.217 real 0m2.833s 00:33:02.217 user 0m2.526s 00:33:02.217 sys 0m0.206s 00:33:02.217 09:02:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:02.217 ************************************ 00:33:02.217 END TEST accel_copy 00:33:02.217 ************************************ 00:33:02.217 09:02:04 -- common/autotest_common.sh@10 -- # set +x 00:33:02.498 09:02:04 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:33:02.498 09:02:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:02.498 09:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:02.498 09:02:04 -- common/autotest_common.sh@10 -- # set +x 00:33:02.498 ************************************ 00:33:02.498 START TEST accel_fill 00:33:02.498 ************************************ 00:33:02.498 09:02:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:33:02.498 09:02:04 -- accel/accel.sh@16 -- # local accel_opc 00:33:02.499 09:02:04 -- accel/accel.sh@17 -- # local 
accel_module 00:33:02.499 09:02:04 -- accel/accel.sh@19 -- # IFS=: 00:33:02.499 09:02:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:33:02.499 09:02:04 -- accel/accel.sh@19 -- # read -r var val 00:33:02.499 09:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:33:02.499 09:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:33:02.499 09:02:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:02.499 09:02:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:02.499 09:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:02.499 09:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:02.499 09:02:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:02.499 09:02:04 -- accel/accel.sh@40 -- # local IFS=, 00:33:02.499 09:02:04 -- accel/accel.sh@41 -- # jq -r . 00:33:02.499 [2024-04-18 09:02:04.461123] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:02.499 [2024-04-18 09:02:04.461379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65083 ] 00:33:02.783 [2024-04-18 09:02:04.633118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.066 [2024-04-18 09:02:04.958670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=0x1 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=fill 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@23 -- # accel_opc=fill 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=0x80 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case 
"$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=software 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@22 -- # accel_module=software 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=64 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=64 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=1 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val=Yes 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:03.334 09:02:05 -- accel/accel.sh@20 -- # val= 00:33:03.334 09:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # IFS=: 00:33:03.334 09:02:05 -- accel/accel.sh@19 -- # read -r var val 00:33:05.872 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.872 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.872 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.872 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.872 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.872 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.872 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.872 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.872 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.872 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.872 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.873 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.873 09:02:07 -- accel/accel.sh@20 -- # val= 00:33:05.873 09:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:33:05.873 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.873 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.873 09:02:07 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:33:05.873 09:02:07 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:33:05.873 09:02:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:05.873 00:33:05.873 real 0m2.964s 00:33:05.873 user 0m2.659s 00:33:05.873 sys 0m0.197s 00:33:05.873 09:02:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:05.873 09:02:07 -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 ************************************ 00:33:05.873 END TEST accel_fill 00:33:05.873 ************************************ 00:33:05.873 09:02:07 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:33:05.873 09:02:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:05.873 09:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:05.873 09:02:07 -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 ************************************ 00:33:05.873 START TEST accel_copy_crc32c 00:33:05.873 ************************************ 00:33:05.873 09:02:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:33:05.873 09:02:07 -- accel/accel.sh@16 -- # local accel_opc 00:33:05.873 09:02:07 -- accel/accel.sh@17 -- # local accel_module 00:33:05.873 09:02:07 -- accel/accel.sh@19 -- # IFS=: 00:33:05.873 09:02:07 -- accel/accel.sh@19 -- # read -r var val 00:33:05.873 09:02:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:33:05.873 09:02:07 -- accel/accel.sh@12 -- # build_accel_config 00:33:05.873 09:02:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:33:05.873 09:02:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:05.873 09:02:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:05.873 09:02:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:05.873 09:02:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:05.873 09:02:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:05.873 09:02:07 -- accel/accel.sh@40 -- # local IFS=, 00:33:05.873 09:02:07 -- accel/accel.sh@41 -- # jq -r . 00:33:05.873 [2024-04-18 09:02:07.560484] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
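The accel_fill test that just ended used '-f 128 -q 64 -a 64' (fill byte 128, queue depth 64, 64 tasks per core, all documented in the usage text above). The accel_copy_crc32c test starting here runs the fused copy-plus-CRC-32C operation with plain '-t 1 -w copy_crc32c -y'; its xtrace below shows two '4096 bytes' values, consistent with separate source and destination buffers. A sketch adding a queue depth ('-q 64' is illustrative here, not taken from this run):

  # sketch: fused copy+crc32c with an illustrative queue depth of 64 per core
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -q 64 -y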
00:33:05.873 [2024-04-18 09:02:07.560777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65147 ] 00:33:05.873 [2024-04-18 09:02:07.732706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.132 [2024-04-18 09:02:08.076801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=0x1 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=copy_crc32c 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=0 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=software 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@22 -- # accel_module=software 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=32 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=32 
00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=1 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val=Yes 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:06.393 09:02:08 -- accel/accel.sh@20 -- # val= 00:33:06.393 09:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # IFS=: 00:33:06.393 09:02:08 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@20 -- # val= 00:33:08.942 09:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:08.942 09:02:10 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:33:08.942 09:02:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:08.942 00:33:08.942 real 0m3.035s 00:33:08.942 user 0m2.733s 00:33:08.942 sys 0m0.199s 00:33:08.942 09:02:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:08.942 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:33:08.942 ************************************ 00:33:08.942 END TEST accel_copy_crc32c 00:33:08.942 ************************************ 00:33:08.942 09:02:10 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:33:08.942 09:02:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:33:08.942 09:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:08.942 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:33:08.942 ************************************ 00:33:08.942 START TEST accel_copy_crc32c_C2 00:33:08.942 ************************************ 00:33:08.942 09:02:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:33:08.942 09:02:10 -- accel/accel.sh@16 -- # local accel_opc 00:33:08.942 09:02:10 -- accel/accel.sh@17 -- # local accel_module 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # IFS=: 00:33:08.942 09:02:10 -- accel/accel.sh@19 -- # read -r var val 00:33:08.942 09:02:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:33:08.942 09:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:33:08.942 09:02:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:33:08.942 09:02:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:08.942 09:02:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:08.942 09:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:08.942 09:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:08.942 09:02:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:08.942 09:02:10 -- accel/accel.sh@40 -- # local IFS=, 00:33:08.942 09:02:10 -- accel/accel.sh@41 -- # jq -r . 00:33:08.942 [2024-04-18 09:02:10.733062] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:08.942 [2024-04-18 09:02:10.733873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65203 ] 00:33:08.942 [2024-04-18 09:02:10.904161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.199 [2024-04-18 09:02:11.174483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.458 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.458 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.458 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.458 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.458 09:02:11 -- accel/accel.sh@20 -- # val=0x1 00:33:09.458 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.458 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.458 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.458 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.458 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.458 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=copy_crc32c 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=0 00:33:09.459 09:02:11 -- 
accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val='8192 bytes' 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=software 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@22 -- # accel_module=software 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=32 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=32 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=1 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val=Yes 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:09.459 09:02:11 -- accel/accel.sh@20 -- # val= 00:33:09.459 09:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # IFS=: 00:33:09.459 09:02:11 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 
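For this accel_copy_crc32c_C2 variant the wrapper adds '-C 2', which per the usage text sets the io vector size; the '8192 bytes' value in the loop above would be consistent with two 4 KiB segments, though the log itself does not spell that mapping out. The direct form of the command under test (minus the '-c /dev/fd/62' JSON config):

  # io vector size 2, as passed by the accel_test wrapper for this test
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2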
00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@20 -- # val= 00:33:12.029 09:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:12.029 09:02:13 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:33:12.029 09:02:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:12.029 00:33:12.029 real 0m2.966s 00:33:12.029 user 0m2.660s 00:33:12.029 sys 0m0.192s 00:33:12.029 09:02:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:12.029 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:33:12.029 ************************************ 00:33:12.029 END TEST accel_copy_crc32c_C2 00:33:12.029 ************************************ 00:33:12.029 09:02:13 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:33:12.029 09:02:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:12.029 09:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:12.029 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:33:12.029 ************************************ 00:33:12.029 START TEST accel_dualcast 00:33:12.029 ************************************ 00:33:12.029 09:02:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:33:12.029 09:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:33:12.029 09:02:13 -- accel/accel.sh@17 -- # local accel_module 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # IFS=: 00:33:12.029 09:02:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:33:12.029 09:02:13 -- accel/accel.sh@19 -- # read -r var val 00:33:12.029 09:02:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:33:12.029 09:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:33:12.029 09:02:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:12.029 09:02:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:12.029 09:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:12.029 09:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:12.029 09:02:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:12.029 09:02:13 -- accel/accel.sh@40 -- # local IFS=, 00:33:12.029 09:02:13 -- accel/accel.sh@41 -- # jq -r . 00:33:12.029 [2024-04-18 09:02:13.841799] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
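The accel_dualcast test starting here uses '-t 1 -w dualcast -y'; dualcast copies a single source buffer into two destination buffers (a property of the accel dualcast operation rather than something stated in this log). Sketch of the direct invocation:

  # sketch: dualcast, one second, verified; one source fans out to two destinations
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y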
00:33:12.030 [2024-04-18 09:02:13.842265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65259 ] 00:33:12.030 [2024-04-18 09:02:14.043832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.293 [2024-04-18 09:02:14.388855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=0x1 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=dualcast 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=software 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@22 -- # accel_module=software 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=32 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=32 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=1 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val='1 seconds' 
00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val=Yes 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:12.874 09:02:14 -- accel/accel.sh@20 -- # val= 00:33:12.874 09:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # IFS=: 00:33:12.874 09:02:14 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@20 -- # val= 00:33:14.776 09:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:14.776 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:14.776 09:02:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:14.776 09:02:16 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:33:14.776 09:02:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:14.776 00:33:14.776 real 0m3.103s 00:33:14.776 user 0m2.764s 00:33:14.776 sys 0m0.227s 00:33:14.776 09:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:14.776 ************************************ 00:33:14.776 END TEST accel_dualcast 00:33:14.776 ************************************ 00:33:14.776 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:33:15.035 09:02:16 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:33:15.035 09:02:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:15.035 09:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:15.035 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:33:15.035 ************************************ 00:33:15.035 START TEST accel_compare 00:33:15.035 ************************************ 00:33:15.035 09:02:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:33:15.035 09:02:16 -- accel/accel.sh@16 -- # local accel_opc 00:33:15.035 09:02:16 -- accel/accel.sh@17 -- # local 
accel_module 00:33:15.035 09:02:16 -- accel/accel.sh@19 -- # IFS=: 00:33:15.035 09:02:16 -- accel/accel.sh@19 -- # read -r var val 00:33:15.035 09:02:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:33:15.035 09:02:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:33:15.035 09:02:16 -- accel/accel.sh@12 -- # build_accel_config 00:33:15.035 09:02:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:15.035 09:02:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:15.035 09:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:15.035 09:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:15.035 09:02:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:15.035 09:02:17 -- accel/accel.sh@40 -- # local IFS=, 00:33:15.035 09:02:17 -- accel/accel.sh@41 -- # jq -r . 00:33:15.035 [2024-04-18 09:02:17.062665] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:15.035 [2024-04-18 09:02:17.063041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65315 ] 00:33:15.293 [2024-04-18 09:02:17.249006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.552 [2024-04-18 09:02:17.597044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=0x1 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=compare 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@23 -- # accel_opc=compare 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=software 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 
00:33:16.118 09:02:17 -- accel/accel.sh@22 -- # accel_module=software 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=32 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=32 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=1 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val=Yes 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:16.118 09:02:17 -- accel/accel.sh@20 -- # val= 00:33:16.118 09:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # IFS=: 00:33:16.118 09:02:17 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@20 -- # val= 00:33:18.019 09:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.019 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.019 09:02:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:18.019 09:02:20 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:33:18.019 09:02:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:18.019 00:33:18.019 real 0m3.106s 00:33:18.019 user 0m2.776s 00:33:18.019 sys 
0m0.223s 00:33:18.019 09:02:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:18.019 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:33:18.019 ************************************ 00:33:18.019 END TEST accel_compare 00:33:18.019 ************************************ 00:33:18.276 09:02:20 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:33:18.276 09:02:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:18.276 09:02:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:18.276 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:33:18.276 ************************************ 00:33:18.276 START TEST accel_xor 00:33:18.276 ************************************ 00:33:18.276 09:02:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:33:18.276 09:02:20 -- accel/accel.sh@16 -- # local accel_opc 00:33:18.276 09:02:20 -- accel/accel.sh@17 -- # local accel_module 00:33:18.276 09:02:20 -- accel/accel.sh@19 -- # IFS=: 00:33:18.276 09:02:20 -- accel/accel.sh@19 -- # read -r var val 00:33:18.276 09:02:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:33:18.276 09:02:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:33:18.276 09:02:20 -- accel/accel.sh@12 -- # build_accel_config 00:33:18.276 09:02:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:18.276 09:02:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:18.276 09:02:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:18.276 09:02:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:18.276 09:02:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:18.276 09:02:20 -- accel/accel.sh@40 -- # local IFS=, 00:33:18.276 09:02:20 -- accel/accel.sh@41 -- # jq -r . 00:33:18.276 [2024-04-18 09:02:20.272839] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
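
For reference, the accel_perf command the wrapper prints above is the entire interface under test: the /dev/fd/62 argument is only the harness feeding its JSON accel config (empty here) over an inherited descriptor. Below is a minimal sketch of the same xor run outside the harness — SPDK_DIR is an assumption, and dropping -c leaves the default software module that the dump which follows reports as accel_module=software:

    #!/usr/bin/env bash
    set -euo pipefail
    # Assumption: SPDK_DIR points at a built SPDK tree on a machine already
    # set up for SPDK (hugepages allocated, e.g. via scripts/setup.sh).
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
    # Flags copied from the log: -t 1 runs the workload for one second,
    # -w xor selects the opcode, -y verifies each completed operation.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y

The second xor pass further below adds -x 3, raising the source-buffer count from the default of two (val=2 in this run's dump) to three (val=3).
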
00:33:18.276 [2024-04-18 09:02:20.273185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65371 ] 00:33:18.533 [2024-04-18 09:02:20.442484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.883 [2024-04-18 09:02:20.732692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=0x1 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=xor 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@23 -- # accel_opc=xor 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=2 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=software 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@22 -- # accel_module=software 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=32 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=32 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=1 00:33:19.142 09:02:21 -- 
accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:19.142 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.142 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.142 09:02:21 -- accel/accel.sh@20 -- # val=Yes 00:33:19.143 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.143 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.143 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:19.143 09:02:21 -- accel/accel.sh@20 -- # val= 00:33:19.143 09:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # IFS=: 00:33:19.143 09:02:21 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@20 -- # val= 00:33:21.698 09:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.698 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.698 09:02:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:21.698 09:02:23 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:33:21.698 09:02:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:21.698 00:33:21.698 real 0m3.009s 00:33:21.698 user 0m2.690s 00:33:21.698 sys 0m0.209s 00:33:21.698 09:02:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:21.698 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 ************************************ 00:33:21.698 END TEST accel_xor 00:33:21.698 ************************************ 00:33:21.698 09:02:23 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:33:21.698 09:02:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:33:21.698 09:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:21.698 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 ************************************ 00:33:21.698 START TEST accel_xor 00:33:21.698 ************************************ 00:33:21.698 
09:02:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:33:21.698 09:02:23 -- accel/accel.sh@16 -- # local accel_opc 00:33:21.698 09:02:23 -- accel/accel.sh@17 -- # local accel_module 00:33:21.699 09:02:23 -- accel/accel.sh@19 -- # IFS=: 00:33:21.699 09:02:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:33:21.699 09:02:23 -- accel/accel.sh@19 -- # read -r var val 00:33:21.699 09:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:33:21.699 09:02:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:33:21.699 09:02:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:21.699 09:02:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:21.699 09:02:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:21.699 09:02:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:21.699 09:02:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:21.699 09:02:23 -- accel/accel.sh@40 -- # local IFS=, 00:33:21.699 09:02:23 -- accel/accel.sh@41 -- # jq -r . 00:33:21.699 [2024-04-18 09:02:23.398606] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:21.699 [2024-04-18 09:02:23.398923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65427 ] 00:33:21.699 [2024-04-18 09:02:23.570602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.968 [2024-04-18 09:02:23.869954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=0x1 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=xor 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@23 -- # accel_opc=xor 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=3 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 
00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=software 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@22 -- # accel_module=software 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=32 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=32 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=1 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val=Yes 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:22.257 09:02:24 -- accel/accel.sh@20 -- # val= 00:33:22.257 09:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # IFS=: 00:33:22.257 09:02:24 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@20 -- # val= 00:33:24.799 09:02:26 -- accel/accel.sh@21 -- # case "$var" in 
00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:24.799 09:02:26 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:33:24.799 09:02:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:24.799 00:33:24.799 real 0m3.006s 00:33:24.799 user 0m2.703s 00:33:24.799 sys 0m0.197s 00:33:24.799 09:02:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:24.799 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:33:24.799 ************************************ 00:33:24.799 END TEST accel_xor 00:33:24.799 ************************************ 00:33:24.799 09:02:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:33:24.799 09:02:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:33:24.799 09:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:24.799 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:33:24.799 ************************************ 00:33:24.799 START TEST accel_dif_verify 00:33:24.799 ************************************ 00:33:24.799 09:02:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:33:24.799 09:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:33:24.799 09:02:26 -- accel/accel.sh@17 -- # local accel_module 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # IFS=: 00:33:24.799 09:02:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:33:24.799 09:02:26 -- accel/accel.sh@19 -- # read -r var val 00:33:24.799 09:02:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:33:24.799 09:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:33:24.799 09:02:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:24.799 09:02:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:24.799 09:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:24.799 09:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:24.799 09:02:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:24.799 09:02:26 -- accel/accel.sh@40 -- # local IFS=, 00:33:24.799 09:02:26 -- accel/accel.sh@41 -- # jq -r . 00:33:24.799 [2024-04-18 09:02:26.514579] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
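
The dif_verify pass starting here uses the geometry its config dump prints below: 4096-byte data buffers, a 512-byte block size, and 8 bytes of protection information per block (the usual T10 DIF tuple: 2-byte guard CRC, 2-byte application tag, 4-byte reference tag). A short worked sketch of that arithmetic, with the values taken from the dump:

    #!/usr/bin/env bash
    # Plain shell arithmetic over the figures reported by build_accel_config.
    buf=4096 blk=512 pi=8
    blocks=$(( buf / blk ))        # 8 blocks per buffer
    pi_bytes=$(( blocks * pi ))    # 64 bytes of PI checked per 4 KiB buffer
    echo "$blocks blocks/buffer, $pi_bytes PI bytes/buffer"
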
00:33:24.799 [2024-04-18 09:02:26.514922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65483 ] 00:33:24.799 [2024-04-18 09:02:26.695064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.057 [2024-04-18 09:02:26.970226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val=0x1 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val=dif_verify 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val='512 bytes' 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val='8 bytes' 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.315 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.315 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.315 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val=software 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@22 -- # accel_module=software 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 
-- # val=32 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val=32 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val=1 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val=No 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:25.316 09:02:27 -- accel/accel.sh@20 -- # val= 00:33:25.316 09:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # IFS=: 00:33:25.316 09:02:27 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@20 -- # val= 00:33:27.845 09:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:27.845 09:02:29 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:33:27.845 09:02:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.845 00:33:27.845 real 0m2.964s 00:33:27.845 user 0m2.644s 00:33:27.845 sys 0m0.217s 00:33:27.845 09:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.845 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:33:27.845 ************************************ 00:33:27.845 END TEST 
accel_dif_verify 00:33:27.845 ************************************ 00:33:27.845 09:02:29 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:33:27.845 09:02:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:33:27.845 09:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.845 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:33:27.845 ************************************ 00:33:27.845 START TEST accel_dif_generate 00:33:27.845 ************************************ 00:33:27.845 09:02:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:33:27.845 09:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:33:27.845 09:02:29 -- accel/accel.sh@17 -- # local accel_module 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # IFS=: 00:33:27.845 09:02:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:33:27.845 09:02:29 -- accel/accel.sh@19 -- # read -r var val 00:33:27.845 09:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:33:27.845 09:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:33:27.845 09:02:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:27.845 09:02:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:27.845 09:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:27.845 09:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:27.845 09:02:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:27.845 09:02:29 -- accel/accel.sh@40 -- # local IFS=, 00:33:27.845 09:02:29 -- accel/accel.sh@41 -- # jq -r . 00:33:27.845 [2024-04-18 09:02:29.601063] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:27.845 [2024-04-18 09:02:29.601389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65534 ] 00:33:27.845 [2024-04-18 09:02:29.796791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.103 [2024-04-18 09:02:30.070161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=0x1 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=dif_generate 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val='512 bytes' 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val='8 bytes' 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=software 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@22 -- # accel_module=software 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=32 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=32 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=1 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val=No 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:28.362 09:02:30 -- accel/accel.sh@20 -- # val= 00:33:28.362 09:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # IFS=: 00:33:28.362 09:02:30 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- 
accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@20 -- # val= 00:33:30.895 09:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 ************************************ 00:33:30.895 END TEST accel_dif_generate 00:33:30.895 ************************************ 00:33:30.895 09:02:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:30.895 09:02:32 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:33:30.895 09:02:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:30.895 00:33:30.895 real 0m2.986s 00:33:30.895 user 0m2.656s 00:33:30.895 sys 0m0.223s 00:33:30.895 09:02:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:30.895 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.895 09:02:32 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:33:30.895 09:02:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:33:30.895 09:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:30.895 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.895 ************************************ 00:33:30.895 START TEST accel_dif_generate_copy 00:33:30.895 ************************************ 00:33:30.895 09:02:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:33:30.895 09:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:33:30.895 09:02:32 -- accel/accel.sh@17 -- # local accel_module 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # IFS=: 00:33:30.895 09:02:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:33:30.895 09:02:32 -- accel/accel.sh@19 -- # read -r var val 00:33:30.895 09:02:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:33:30.895 09:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:33:30.895 09:02:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:30.895 09:02:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:30.895 09:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:30.895 09:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:30.895 09:02:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:30.895 09:02:32 -- accel/accel.sh@40 -- # local IFS=, 00:33:30.895 09:02:32 -- accel/accel.sh@41 -- # jq -r . 00:33:30.895 [2024-04-18 09:02:32.724968] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
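
dif_generate_copy, starting here, rounds out the DIF family: dif_verify checks protection information already attached to a buffer, dif_generate computes and writes it, and dif_generate_copy generates it while copying the data to a destination buffer. A hedged sketch that drives all three through accel_perf with the same one-second settings the harness uses (SPDK_DIR is again an assumption):

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}   # assumption: your own SPDK build tree
    # One-second pass over each DIF workload, mirroring the harness; -y is
    # not passed, matching the val=No verify setting in these config dumps.
    for w in dif_verify dif_generate dif_generate_copy; do
        "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w"
    done
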
00:33:30.896 [2024-04-18 09:02:32.725320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65591 ] 00:33:30.896 [2024-04-18 09:02:32.893513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.153 [2024-04-18 09:02:33.161863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.455 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.455 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.455 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.455 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.455 09:02:33 -- accel/accel.sh@20 -- # val=0x1 00:33:31.455 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.455 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.455 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.455 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.455 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.455 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val=software 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@22 -- # accel_module=software 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val=32 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val=32 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 
-- # val=1 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val=No 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:31.456 09:02:33 -- accel/accel.sh@20 -- # val= 00:33:31.456 09:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # IFS=: 00:33:31.456 09:02:33 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@20 -- # val= 00:33:34.026 09:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:34.026 09:02:35 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:33:34.026 09:02:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:34.026 00:33:34.026 real 0m2.925s 00:33:34.026 user 0m2.630s 00:33:34.026 sys 0m0.187s 00:33:34.026 09:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:34.026 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:33:34.026 ************************************ 00:33:34.026 END TEST accel_dif_generate_copy 00:33:34.026 ************************************ 00:33:34.026 09:02:35 -- accel/accel.sh@115 -- # [[ y == y ]] 00:33:34.026 09:02:35 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:34.026 09:02:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:33:34.026 09:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:34.026 09:02:35 -- 
common/autotest_common.sh@10 -- # set +x 00:33:34.026 ************************************ 00:33:34.026 START TEST accel_comp 00:33:34.026 ************************************ 00:33:34.026 09:02:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:34.026 09:02:35 -- accel/accel.sh@16 -- # local accel_opc 00:33:34.026 09:02:35 -- accel/accel.sh@17 -- # local accel_module 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # IFS=: 00:33:34.026 09:02:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:34.026 09:02:35 -- accel/accel.sh@19 -- # read -r var val 00:33:34.026 09:02:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:34.026 09:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:33:34.026 09:02:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:34.026 09:02:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:34.026 09:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:34.026 09:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:34.026 09:02:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:34.026 09:02:35 -- accel/accel.sh@40 -- # local IFS=, 00:33:34.026 09:02:35 -- accel/accel.sh@41 -- # jq -r . 00:33:34.026 [2024-04-18 09:02:35.796891] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:34.026 [2024-04-18 09:02:35.797295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65647 ] 00:33:34.026 [2024-04-18 09:02:35.967990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.285 [2024-04-18 09:02:36.271284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val=0x1 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val=compress 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@23 
-- # accel_opc=compress 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val=software 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@22 -- # accel_module=software 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.543 09:02:36 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:34.543 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.543 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val=32 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val=32 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val=1 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val=No 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:34.544 09:02:36 -- accel/accel.sh@20 -- # val= 00:33:34.544 09:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # IFS=: 00:33:34.544 09:02:36 -- accel/accel.sh@19 -- # read -r var val 00:33:37.073 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.073 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.073 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.073 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.073 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.073 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.073 09:02:38 -- accel/accel.sh@19 -- # 
read -r var val 00:33:37.073 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.074 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.074 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.074 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.074 09:02:38 -- accel/accel.sh@20 -- # val= 00:33:37.074 09:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.074 09:02:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:37.074 09:02:38 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:33:37.074 09:02:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:37.074 00:33:37.074 real 0m3.059s 00:33:37.074 user 0m2.690s 00:33:37.074 sys 0m0.264s 00:33:37.074 09:02:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:37.074 09:02:38 -- common/autotest_common.sh@10 -- # set +x 00:33:37.074 ************************************ 00:33:37.074 END TEST accel_comp 00:33:37.074 ************************************ 00:33:37.074 09:02:38 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:33:37.074 09:02:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:33:37.074 09:02:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:37.074 09:02:38 -- common/autotest_common.sh@10 -- # set +x 00:33:37.074 ************************************ 00:33:37.074 START TEST accel_decomp 00:33:37.074 ************************************ 00:33:37.074 09:02:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:33:37.074 09:02:38 -- accel/accel.sh@16 -- # local accel_opc 00:33:37.074 09:02:38 -- accel/accel.sh@17 -- # local accel_module 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # IFS=: 00:33:37.074 09:02:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:33:37.074 09:02:38 -- accel/accel.sh@19 -- # read -r var val 00:33:37.074 09:02:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:33:37.074 09:02:38 -- accel/accel.sh@12 -- # build_accel_config 00:33:37.074 09:02:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:37.074 09:02:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:37.074 09:02:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:37.074 09:02:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:37.074 09:02:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:37.074 09:02:38 -- accel/accel.sh@40 -- # local IFS=, 00:33:37.074 09:02:38 -- accel/accel.sh@41 -- # jq -r . 00:33:37.074 [2024-04-18 09:02:38.993478] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:33:37.074 [2024-04-18 09:02:38.993847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65705 ] 00:33:37.074 [2024-04-18 09:02:39.170646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.641 [2024-04-18 09:02:39.464851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.899 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.899 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.899 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.899 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.899 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.899 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.899 09:02:39 -- accel/accel.sh@20 -- # val=0x1 00:33:37.899 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.899 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=decompress 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=software 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@22 -- # accel_module=software 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=32 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- 
accel/accel.sh@20 -- # val=32 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=1 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val=Yes 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:37.900 09:02:39 -- accel/accel.sh@20 -- # val= 00:33:37.900 09:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # IFS=: 00:33:37.900 09:02:39 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:39.858 09:02:41 -- accel/accel.sh@20 -- # val= 00:33:39.858 09:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # IFS=: 00:33:39.858 09:02:41 -- accel/accel.sh@19 -- # read -r var val 00:33:40.119 09:02:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:40.119 ************************************ 00:33:40.119 END TEST accel_decomp 00:33:40.119 ************************************ 00:33:40.119 09:02:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:40.119 09:02:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:40.119 00:33:40.119 real 0m3.033s 00:33:40.119 user 0m2.704s 00:33:40.119 sys 0m0.222s 00:33:40.119 09:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:40.119 09:02:41 -- common/autotest_common.sh@10 -- # set +x 00:33:40.119 09:02:42 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
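
Each START/END banner and the real/user/sys triple around it come from the run_test helper in autotest_common.sh; conceptually the wrapper is close to the following (a simplified sketch, not the literal source, with the banner shortened to one line):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # e.g. accel_test -t 1 -w decompress -l .../test/accel/bib -y -o 0
        echo "************ END TEST $name ************"
    }
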
00:33:40.119 09:02:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:33:40.119 09:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:40.119 09:02:42 -- common/autotest_common.sh@10 -- # set +x 00:33:40.119 ************************************ 00:33:40.119 START TEST accel_decmop_full 00:33:40.119 ************************************ 00:33:40.119 09:02:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:33:40.119 09:02:42 -- accel/accel.sh@16 -- # local accel_opc 00:33:40.119 09:02:42 -- accel/accel.sh@17 -- # local accel_module 00:33:40.119 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.119 09:02:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:33:40.119 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.119 09:02:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:33:40.119 09:02:42 -- accel/accel.sh@12 -- # build_accel_config 00:33:40.119 09:02:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:40.119 09:02:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:40.119 09:02:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:40.119 09:02:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:40.119 09:02:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:40.119 09:02:42 -- accel/accel.sh@40 -- # local IFS=, 00:33:40.119 09:02:42 -- accel/accel.sh@41 -- # jq -r . 00:33:40.119 [2024-04-18 09:02:42.165674] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:40.119 [2024-04-18 09:02:42.166008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65761 ] 00:33:40.379 [2024-04-18 09:02:42.337617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.639 [2024-04-18 09:02:42.612935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val=0x1 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 
09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val=decompress 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val='111250 bytes' 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val=software 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@22 -- # accel_module=software 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:40.899 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.899 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.899 09:02:42 -- accel/accel.sh@20 -- # val=32 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val=32 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val=1 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val=Yes 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:40.900 09:02:42 -- accel/accel.sh@20 -- # val= 00:33:40.900 09:02:42 -- accel/accel.sh@21 -- # case "$var" in 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # IFS=: 00:33:40.900 09:02:42 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r 
var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@20 -- # val= 00:33:43.451 09:02:45 -- accel/accel.sh@21 -- # case "$var" in 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:43.451 09:02:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:43.451 09:02:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:43.451 00:33:43.451 real 0m2.988s 00:33:43.451 user 0m2.686s 00:33:43.451 sys 0m0.196s 00:33:43.451 09:02:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:43.451 09:02:45 -- common/autotest_common.sh@10 -- # set +x 00:33:43.451 ************************************ 00:33:43.451 END TEST accel_decmop_full 00:33:43.451 ************************************ 00:33:43.451 09:02:45 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:33:43.451 09:02:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:33:43.451 09:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:43.451 09:02:45 -- common/autotest_common.sh@10 -- # set +x 00:33:43.451 ************************************ 00:33:43.451 START TEST accel_decomp_mcore 00:33:43.451 ************************************ 00:33:43.451 09:02:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:33:43.451 09:02:45 -- accel/accel.sh@16 -- # local accel_opc 00:33:43.451 09:02:45 -- accel/accel.sh@17 -- # local accel_module 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # IFS=: 00:33:43.451 09:02:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:33:43.451 09:02:45 -- accel/accel.sh@19 -- # read -r var val 00:33:43.451 09:02:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:33:43.451 09:02:45 -- accel/accel.sh@12 -- # build_accel_config 00:33:43.451 09:02:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:43.451 09:02:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:43.451 09:02:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:43.451 09:02:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:43.451 09:02:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:43.451 09:02:45 -- accel/accel.sh@40 -- # local IFS=, 00:33:43.451 09:02:45 -- accel/accel.sh@41 -- # jq -r . 00:33:43.451 [2024-04-18 09:02:45.300066] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
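
accel_decomp_mcore differs from the single-core runs only in the -m core-mask flag: 0xf has bits 0-3 set, which is why four reactors come up on cores 0-3 below. A hand-run equivalent (same assumed paths; -y enables result verification):

    # Verified software decompress spread across four reactors (cores 0-3).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -m 0xf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
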
00:33:43.451 [2024-04-18 09:02:45.300455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65817 ] 00:33:43.451 [2024-04-18 09:02:45.470581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.710 [2024-04-18 09:02:45.748700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.710 [2024-04-18 09:02:45.748795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.710 [2024-04-18 09:02:45.748895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.710 [2024-04-18 09:02:45.748915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=0xf 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=decompress 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=software 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@22 -- # accel_module=software 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 
00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=32 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=32 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=1 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val=Yes 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:44.276 09:02:46 -- accel/accel.sh@20 -- # val= 00:33:44.276 09:02:46 -- accel/accel.sh@21 -- # case "$var" in 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # IFS=: 00:33:44.276 09:02:46 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- 
accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@20 -- # val= 00:33:46.178 09:02:48 -- accel/accel.sh@21 -- # case "$var" in 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.178 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.178 09:02:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:46.178 09:02:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:46.178 09:02:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:46.178 00:33:46.178 real 0m3.000s 00:33:46.178 user 0m8.609s 00:33:46.178 sys 0m0.238s 00:33:46.178 09:02:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:46.178 09:02:48 -- common/autotest_common.sh@10 -- # set +x 00:33:46.178 ************************************ 00:33:46.178 END TEST accel_decomp_mcore 00:33:46.178 ************************************ 00:33:46.436 09:02:48 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:33:46.436 09:02:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:46.436 09:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.436 09:02:48 -- common/autotest_common.sh@10 -- # set +x 00:33:46.436 ************************************ 00:33:46.436 START TEST accel_decomp_full_mcore 00:33:46.436 ************************************ 00:33:46.436 09:02:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:33:46.436 09:02:48 -- accel/accel.sh@16 -- # local accel_opc 00:33:46.436 09:02:48 -- accel/accel.sh@17 -- # local accel_module 00:33:46.436 09:02:48 -- accel/accel.sh@19 -- # IFS=: 00:33:46.436 09:02:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:33:46.436 09:02:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:33:46.436 09:02:48 -- accel/accel.sh@19 -- # read -r var val 00:33:46.436 09:02:48 -- accel/accel.sh@12 -- # build_accel_config 00:33:46.436 09:02:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:46.436 09:02:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:46.436 09:02:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:46.436 09:02:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:46.436 09:02:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:46.436 09:02:48 -- accel/accel.sh@40 -- # local IFS=, 00:33:46.436 09:02:48 -- accel/accel.sh@41 -- # jq -r . 00:33:46.436 [2024-04-18 09:02:48.431229] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
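
The _full variants add -o 0 to the same command line; judging from the logged transfer size flipping from '4096 bytes' to '111250 bytes', -o sets the I/O size and 0 appears to fall through to the full size of the bib input. A sketch combining it with the four-core mask used here:

    # Full-file (111250-byte) verified decompress on cores 0-3 instead of 4 KiB blocks.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -m 0xf -t 1 -w decompress -o 0 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
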
00:33:46.436 [2024-04-18 09:02:48.431564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65881 ] 00:33:46.693 [2024-04-18 09:02:48.601259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:46.950 [2024-04-18 09:02:48.853450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.950 [2024-04-18 09:02:48.853596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.950 [2024-04-18 09:02:48.853731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.950 [2024-04-18 09:02:48.853760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=0xf 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=decompress 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val='111250 bytes' 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=software 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@22 -- # accel_module=software 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 
00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=32 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.208 09:02:49 -- accel/accel.sh@20 -- # val=32 00:33:47.208 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.208 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.209 09:02:49 -- accel/accel.sh@20 -- # val=1 00:33:47.209 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.209 09:02:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:47.209 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.209 09:02:49 -- accel/accel.sh@20 -- # val=Yes 00:33:47.209 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.209 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.209 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:47.209 09:02:49 -- accel/accel.sh@20 -- # val= 00:33:47.209 09:02:49 -- accel/accel.sh@21 -- # case "$var" in 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # IFS=: 00:33:47.209 09:02:49 -- accel/accel.sh@19 -- # read -r var val 00:33:49.800 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.800 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.800 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- 
accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@20 -- # val= 00:33:49.801 09:02:51 -- accel/accel.sh@21 -- # case "$var" in 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:49.801 09:02:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:49.801 09:02:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:49.801 00:33:49.801 real 0m2.956s 00:33:49.801 user 0m8.521s 00:33:49.801 sys 0m0.223s 00:33:49.801 09:02:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:49.801 09:02:51 -- common/autotest_common.sh@10 -- # set +x 00:33:49.801 ************************************ 00:33:49.801 END TEST accel_decomp_full_mcore 00:33:49.801 ************************************ 00:33:49.801 09:02:51 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:49.801 09:02:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:33:49.801 09:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:49.801 09:02:51 -- common/autotest_common.sh@10 -- # set +x 00:33:49.801 ************************************ 00:33:49.801 START TEST accel_decomp_mthread 00:33:49.801 ************************************ 00:33:49.801 09:02:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:49.801 09:02:51 -- accel/accel.sh@16 -- # local accel_opc 00:33:49.801 09:02:51 -- accel/accel.sh@17 -- # local accel_module 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # IFS=: 00:33:49.801 09:02:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:49.801 09:02:51 -- accel/accel.sh@19 -- # read -r var val 00:33:49.801 09:02:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:33:49.801 09:02:51 -- accel/accel.sh@12 -- # build_accel_config 00:33:49.801 09:02:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:49.801 09:02:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:49.801 09:02:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:49.801 09:02:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:49.801 09:02:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:49.801 09:02:51 -- accel/accel.sh@40 -- # local IFS=, 00:33:49.801 09:02:51 -- accel/accel.sh@41 -- # jq -r . 00:33:49.801 [2024-04-18 09:02:51.512136] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
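
The mthread variant instead raises the thread count: the 'val=2' in the trace below (versus 'val=1' in the earlier runs) corresponds to -T 2. Hand-run sketch, same assumed paths:

    # Verified software decompress driven from two worker threads.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -T 2 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
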
00:33:49.801 [2024-04-18 09:02:51.512271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65941 ] 00:33:49.801 [2024-04-18 09:02:51.681434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.061 [2024-04-18 09:02:52.042702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val=0x1 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.320 09:02:52 -- accel/accel.sh@20 -- # val=decompress 00:33:50.320 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.320 09:02:52 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:50.320 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val=software 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@22 -- # accel_module=software 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val=32 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- 
accel/accel.sh@20 -- # val=32 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val=2 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val=Yes 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:50.321 09:02:52 -- accel/accel.sh@20 -- # val= 00:33:50.321 09:02:52 -- accel/accel.sh@21 -- # case "$var" in 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # IFS=: 00:33:50.321 09:02:52 -- accel/accel.sh@19 -- # read -r var val 00:33:52.853 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.853 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.853 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.853 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.853 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.853 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.853 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.853 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.853 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.854 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.854 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.854 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.854 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.854 09:02:54 -- accel/accel.sh@20 -- # val= 00:33:52.854 09:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.854 09:02:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:52.854 09:02:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:52.854 09:02:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:52.854 00:33:52.854 real 0m3.041s 00:33:52.854 user 0m2.726s 00:33:52.854 sys 0m0.206s 00:33:52.854 09:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:52.854 09:02:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.854 ************************************ 00:33:52.854 END 
TEST accel_decomp_mthread 00:33:52.854 ************************************ 00:33:52.854 09:02:54 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:52.854 09:02:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:52.854 09:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:52.854 09:02:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.854 ************************************ 00:33:52.854 START TEST accel_deomp_full_mthread 00:33:52.854 ************************************ 00:33:52.854 09:02:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:52.854 09:02:54 -- accel/accel.sh@16 -- # local accel_opc 00:33:52.854 09:02:54 -- accel/accel.sh@17 -- # local accel_module 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # IFS=: 00:33:52.854 09:02:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:52.854 09:02:54 -- accel/accel.sh@19 -- # read -r var val 00:33:52.854 09:02:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:33:52.854 09:02:54 -- accel/accel.sh@12 -- # build_accel_config 00:33:52.854 09:02:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:52.854 09:02:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:52.854 09:02:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:52.854 09:02:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:52.854 09:02:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:52.854 09:02:54 -- accel/accel.sh@40 -- # local IFS=, 00:33:52.854 09:02:54 -- accel/accel.sh@41 -- # jq -r . 00:33:52.854 [2024-04-18 09:02:54.678794] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
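
The last decompress permutation stacks both knobs, full-size buffers (-o 0) and two threads (-T 2); the banner spelling 'accel_deomp_full_mthread' is the run_test name used by the test script itself. Equivalent standalone invocation:

    # Full 111250-byte buffers decompressed and verified from two threads.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -o 0 -T 2 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
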
00:33:52.854 [2024-04-18 09:02:54.678926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65996 ] 00:33:52.854 [2024-04-18 09:02:54.850182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.113 [2024-04-18 09:02:55.155923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=0x1 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=decompress 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@23 -- # accel_opc=decompress 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val='111250 bytes' 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=software 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@22 -- # accel_module=software 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=32 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- 
accel/accel.sh@20 -- # val=32 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=2 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val=Yes 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:53.372 09:02:55 -- accel/accel.sh@20 -- # val= 00:33:53.372 09:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # IFS=: 00:33:53.372 09:02:55 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@20 -- # val= 00:33:55.906 09:02:57 -- accel/accel.sh@21 -- # case "$var" in 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # IFS=: 00:33:55.906 09:02:57 -- accel/accel.sh@19 -- # read -r var val 00:33:55.906 09:02:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:33:55.906 09:02:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:33:55.906 09:02:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:55.906 00:33:55.906 real 0m3.037s 00:33:55.906 user 0m2.734s 00:33:55.906 sys 0m0.197s 00:33:55.906 09:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:55.906 09:02:57 -- common/autotest_common.sh@10 -- # set +x 00:33:55.906 ************************************ 00:33:55.906 END 
TEST accel_deomp_full_mthread 00:33:55.906 ************************************ 00:33:55.906 09:02:57 -- accel/accel.sh@124 -- # [[ n == y ]] 00:33:55.906 09:02:57 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:33:55.906 09:02:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:33:55.906 09:02:57 -- accel/accel.sh@137 -- # build_accel_config 00:33:55.906 09:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:55.906 09:02:57 -- common/autotest_common.sh@10 -- # set +x 00:33:55.906 09:02:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:33:55.906 09:02:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:33:55.906 09:02:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:33:55.906 09:02:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:33:55.906 09:02:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:33:55.906 09:02:57 -- accel/accel.sh@40 -- # local IFS=, 00:33:55.906 09:02:57 -- accel/accel.sh@41 -- # jq -r . 00:33:55.906 ************************************ 00:33:55.906 START TEST accel_dif_functional_tests 00:33:55.906 ************************************ 00:33:55.906 09:02:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:33:55.906 [2024-04-18 09:02:57.861157] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:55.906 [2024-04-18 09:02:57.861290] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66049 ] 00:33:56.169 [2024-04-18 09:02:58.041973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:56.427 [2024-04-18 09:02:58.385749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.427 [2024-04-18 09:02:58.385934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.427 [2024-04-18 09:02:58.385958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.994 00:33:56.994 00:33:56.994 CUnit - A unit testing framework for C - Version 2.1-3 00:33:56.994 http://cunit.sourceforge.net/ 00:33:56.994 00:33:56.994 00:33:56.994 Suite: accel_dif 00:33:56.994 Test: verify: DIF generated, GUARD check ...passed 00:33:56.994 Test: verify: DIF generated, APPTAG check ...passed 00:33:56.994 Test: verify: DIF generated, REFTAG check ...passed 00:33:56.994 Test: verify: DIF not generated, GUARD check ...[2024-04-18 09:02:58.822965] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:33:56.994 [2024-04-18 09:02:58.823062] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:33:56.994 passed 00:33:56.994 Test: verify: DIF not generated, APPTAG check ...passed 00:33:56.994 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 09:02:58.823135] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:33:56.994 [2024-04-18 09:02:58.823194] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:33:56.994 passed 00:33:56.994 Test: verify: APPTAG correct, APPTAG check ...[2024-04-18 09:02:58.823246] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:33:56.994 [2024-04-18 09:02:58.823290] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:33:56.994 passed 00:33:56.994 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:33:56.994 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:33:56.994 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-04-18 09:02:58.823403] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:33:56.994 passed 00:33:56.994 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:33:56.994 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 09:02:58.823822] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:33:56.994 passed 00:33:56.994 Test: generate copy: DIF generated, GUARD check ...passed 00:33:56.994 Test: generate copy: DIF generated, APPTAG check ...passed 00:33:56.994 Test: generate copy: DIF generated, REFTAG check ...passed 00:33:56.994 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:33:56.994 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:33:56.994 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:33:56.994 Test: generate copy: iovecs-len validate ...[2024-04-18 09:02:58.825340] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:33:56.994 passed 00:33:56.994 Test: generate copy: buffer alignment validate ...passed 00:33:56.994 00:33:56.994 Run Summary: Type Total Ran Passed Failed Inactive 00:33:56.994 suites 1 1 n/a 0 0 00:33:56.994 tests 20 20 20 0 0 00:33:56.994 asserts 204 204 204 0 n/a 00:33:56.994 00:33:56.994 Elapsed time = 0.007 seconds 00:33:58.368 00:33:58.368 real 0m2.562s 00:33:58.368 user 0m4.953s 00:33:58.368 sys 0m0.294s 00:33:58.368 09:03:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:58.368 ************************************ 00:33:58.368 END TEST accel_dif_functional_tests 00:33:58.368 ************************************ 00:33:58.368 09:03:00 -- common/autotest_common.sh@10 -- # set +x 00:33:58.368 ************************************ 00:33:58.368 END TEST accel 00:33:58.368 ************************************ 00:33:58.368 00:33:58.368 real 1m14.633s 00:33:58.368 user 1m19.462s 00:33:58.368 sys 0m7.452s 00:33:58.368 09:03:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:58.368 09:03:00 -- common/autotest_common.sh@10 -- # set +x 00:33:58.368 09:03:00 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:33:58.368 09:03:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:58.368 09:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:58.368 09:03:00 -- common/autotest_common.sh@10 -- # set +x 00:33:58.626 ************************************ 00:33:58.626 START TEST accel_rpc 00:33:58.626 ************************************ 00:33:58.626 09:03:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:33:58.626 * Looking for test storage...
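The DIF functional test binary exercised above reads its accel configuration as JSON on a file descriptor; the harness feeds it /dev/fd/62 generated by build_accel_config. A minimal standalone sketch, assuming the same build tree and that an empty subsystem list is an acceptable stand-in for the generated config:

    # sketch only: build_accel_config normally generates this JSON; the empty
    # subsystem list below is an assumption, not what the harness actually emits
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{"subsystems":[]}')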
00:33:58.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:33:58.626 09:03:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:33:58.626 09:03:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66141 00:33:58.626 09:03:00 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:33:58.626 09:03:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 66141 00:33:58.626 09:03:00 -- common/autotest_common.sh@817 -- # '[' -z 66141 ']' 00:33:58.626 09:03:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.626 09:03:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:58.626 09:03:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.626 09:03:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:58.626 09:03:00 -- common/autotest_common.sh@10 -- # set +x 00:33:58.626 [2024-04-18 09:03:00.709043] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:33:58.626 [2024-04-18 09:03:00.709413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66141 ] 00:33:58.885 [2024-04-18 09:03:00.875159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.143 [2024-04-18 09:03:01.128014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.712 09:03:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:59.712 09:03:01 -- common/autotest_common.sh@850 -- # return 0 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:33:59.712 09:03:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:59.712 09:03:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:59.712 09:03:01 -- common/autotest_common.sh@10 -- # set +x 00:33:59.712 ************************************ 00:33:59.712 START TEST accel_assign_opcode 00:33:59.712 ************************************ 00:33:59.712 09:03:01 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:33:59.712 09:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:59.712 09:03:01 -- common/autotest_common.sh@10 -- # set +x 00:33:59.712 [2024-04-18 09:03:01.597330] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:33:59.712 09:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:33:59.712 09:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:59.712 09:03:01 -- common/autotest_common.sh@10 -- # set +x 00:33:59.712 [2024-04-18 09:03:01.609288] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:33:59.712 09:03:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:59.712 09:03:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:33:59.712 09:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:59.712 09:03:01 -- common/autotest_common.sh@10 -- # set +x 00:34:00.647 09:03:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.647 09:03:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:34:00.647 09:03:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.647 09:03:02 -- common/autotest_common.sh@10 -- # set +x 00:34:00.647 09:03:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:34:00.647 09:03:02 -- accel/accel_rpc.sh@42 -- # grep software 00:34:00.647 09:03:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.647 software 00:34:00.647 ************************************ 00:34:00.647 END TEST accel_assign_opcode 00:34:00.647 ************************************ 00:34:00.647 00:34:00.647 real 0m1.088s 00:34:00.647 user 0m0.045s 00:34:00.647 sys 0m0.014s 00:34:00.647 09:03:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:00.647 09:03:02 -- common/autotest_common.sh@10 -- # set +x 00:34:00.647 09:03:02 -- accel/accel_rpc.sh@55 -- # killprocess 66141 00:34:00.647 09:03:02 -- common/autotest_common.sh@936 -- # '[' -z 66141 ']' 00:34:00.647 09:03:02 -- common/autotest_common.sh@940 -- # kill -0 66141 00:34:00.647 09:03:02 -- common/autotest_common.sh@941 -- # uname 00:34:00.647 09:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:00.647 09:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66141 00:34:00.906 killing process with pid 66141 00:34:00.906 09:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:00.906 09:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:00.906 09:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66141' 00:34:00.906 09:03:02 -- common/autotest_common.sh@955 -- # kill 66141 00:34:00.906 09:03:02 -- common/autotest_common.sh@960 -- # wait 66141 00:34:03.549 ************************************ 00:34:03.549 END TEST accel_rpc 00:34:03.549 ************************************ 00:34:03.549 00:34:03.549 real 0m5.081s 00:34:03.549 user 0m4.930s 00:34:03.549 sys 0m0.598s 00:34:03.549 09:03:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:03.549 09:03:05 -- common/autotest_common.sh@10 -- # set +x 00:34:03.549 09:03:05 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:03.549 09:03:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:03.549 09:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:03.549 09:03:05 -- common/autotest_common.sh@10 -- # set +x 00:34:03.807 ************************************ 00:34:03.807 START TEST app_cmdline 00:34:03.807 ************************************ 00:34:03.807 09:03:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:03.807 * Looking for test storage... 
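The assign-opcode flow verified above is plain JSON-RPC against a spdk_tgt started with --wait-for-rpc, as in this run; rpc_cmd is a thin wrapper around scripts/rpc.py. A minimal sketch of the same sequence by hand, with paths assumed relative to the spdk repository root:

    # route the copy opcode to the software module before subsystem init
    scripts/rpc.py accel_assign_opc -o copy -m software
    scripts/rpc.py framework_start_init
    # confirm the assignment stuck; should print "software"
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy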
00:34:03.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:03.807 09:03:05 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:34:03.807 09:03:05 -- app/cmdline.sh@17 -- # spdk_tgt_pid=66284 00:34:03.807 09:03:05 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:34:03.807 09:03:05 -- app/cmdline.sh@18 -- # waitforlisten 66284 00:34:03.807 09:03:05 -- common/autotest_common.sh@817 -- # '[' -z 66284 ']' 00:34:03.807 09:03:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.807 09:03:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:03.807 09:03:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.807 09:03:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:03.807 09:03:05 -- common/autotest_common.sh@10 -- # set +x 00:34:04.065 [2024-04-18 09:03:05.919483] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:34:04.065 [2024-04-18 09:03:05.919863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66284 ] 00:34:04.065 [2024-04-18 09:03:06.085442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.322 [2024-04-18 09:03:06.352033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.697 09:03:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:05.697 09:03:07 -- common/autotest_common.sh@850 -- # return 0 00:34:05.697 09:03:07 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:34:05.697 { 00:34:05.697 "version": "SPDK v24.05-pre git sha1 ca13e8d81", 00:34:05.697 "fields": { 00:34:05.697 "major": 24, 00:34:05.697 "minor": 5, 00:34:05.697 "patch": 0, 00:34:05.697 "suffix": "-pre", 00:34:05.697 "commit": "ca13e8d81" 00:34:05.697 } 00:34:05.697 } 00:34:05.697 09:03:07 -- app/cmdline.sh@22 -- # expected_methods=() 00:34:05.697 09:03:07 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:34:05.697 09:03:07 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:34:05.697 09:03:07 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:34:05.697 09:03:07 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:34:05.697 09:03:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:05.697 09:03:07 -- common/autotest_common.sh@10 -- # set +x 00:34:05.697 09:03:07 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:34:05.697 09:03:07 -- app/cmdline.sh@26 -- # sort 00:34:05.697 09:03:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:05.697 09:03:07 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:34:05.697 09:03:07 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:34:05.697 09:03:07 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:05.697 09:03:07 -- common/autotest_common.sh@638 -- # local es=0 00:34:05.697 09:03:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:05.697 09:03:07 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:05.697 09:03:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:05.697 09:03:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:05.697 09:03:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:05.697 09:03:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:05.697 09:03:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:34:05.697 09:03:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:05.697 09:03:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:05.697 09:03:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:06.263 request: 00:34:06.263 { 00:34:06.263 "method": "env_dpdk_get_mem_stats", 00:34:06.264 "req_id": 1 00:34:06.264 } 00:34:06.264 Got JSON-RPC error response 00:34:06.264 response: 00:34:06.264 { 00:34:06.264 "code": -32601, 00:34:06.264 "message": "Method not found" 00:34:06.264 } 00:34:06.264 09:03:08 -- common/autotest_common.sh@641 -- # es=1 00:34:06.264 09:03:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:34:06.264 09:03:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:34:06.264 09:03:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:34:06.264 09:03:08 -- app/cmdline.sh@1 -- # killprocess 66284 00:34:06.264 09:03:08 -- common/autotest_common.sh@936 -- # '[' -z 66284 ']' 00:34:06.264 09:03:08 -- common/autotest_common.sh@940 -- # kill -0 66284 00:34:06.264 09:03:08 -- common/autotest_common.sh@941 -- # uname 00:34:06.264 09:03:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:06.264 09:03:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66284 00:34:06.264 09:03:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:06.264 09:03:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:06.264 09:03:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66284' 00:34:06.264 killing process with pid 66284 00:34:06.264 09:03:08 -- common/autotest_common.sh@955 -- # kill 66284 00:34:06.264 09:03:08 -- common/autotest_common.sh@960 -- # wait 66284 00:34:09.549 00:34:09.549 real 0m5.278s 00:34:09.549 user 0m5.660s 00:34:09.549 sys 0m0.660s 00:34:09.549 09:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:09.549 ************************************ 00:34:09.549 END TEST app_cmdline 00:34:09.549 ************************************ 00:34:09.549 09:03:10 -- common/autotest_common.sh@10 -- # set +x 00:34:09.549 09:03:11 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:09.549 09:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:09.549 09:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:09.549 09:03:11 -- common/autotest_common.sh@10 -- # set +x 00:34:09.549 ************************************ 00:34:09.549 START TEST version 00:34:09.549 ************************************ 00:34:09.549 09:03:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:09.549 * Looking for test storage... 
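The -32601 "Method not found" response above is the --rpcs-allowed filter working as intended, not a failure. A minimal sketch reproducing the check by hand, assuming the target is given time to come up before the rpc.py calls:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed: returns the version object
    scripts/rpc.py env_dpdk_get_mem_stats    # filtered: fails with code -32601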
00:34:09.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:09.549 09:03:11 -- app/version.sh@17 -- # get_header_version major 00:34:09.549 09:03:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:09.549 09:03:11 -- app/version.sh@14 -- # tr -d '"' 00:34:09.549 09:03:11 -- app/version.sh@14 -- # cut -f2 00:34:09.549 09:03:11 -- app/version.sh@17 -- # major=24 00:34:09.549 09:03:11 -- app/version.sh@18 -- # get_header_version minor 00:34:09.549 09:03:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:09.549 09:03:11 -- app/version.sh@14 -- # cut -f2 00:34:09.549 09:03:11 -- app/version.sh@14 -- # tr -d '"' 00:34:09.549 09:03:11 -- app/version.sh@18 -- # minor=5 00:34:09.549 09:03:11 -- app/version.sh@19 -- # get_header_version patch 00:34:09.549 09:03:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:09.549 09:03:11 -- app/version.sh@14 -- # cut -f2 00:34:09.549 09:03:11 -- app/version.sh@14 -- # tr -d '"' 00:34:09.549 09:03:11 -- app/version.sh@19 -- # patch=0 00:34:09.549 09:03:11 -- app/version.sh@20 -- # get_header_version suffix 00:34:09.549 09:03:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:09.549 09:03:11 -- app/version.sh@14 -- # cut -f2 00:34:09.549 09:03:11 -- app/version.sh@14 -- # tr -d '"' 00:34:09.549 09:03:11 -- app/version.sh@20 -- # suffix=-pre 00:34:09.549 09:03:11 -- app/version.sh@22 -- # version=24.5 00:34:09.549 09:03:11 -- app/version.sh@25 -- # (( patch != 0 )) 00:34:09.549 09:03:11 -- app/version.sh@28 -- # version=24.5rc0 00:34:09.549 09:03:11 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:09.549 09:03:11 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:34:09.549 09:03:11 -- app/version.sh@30 -- # py_version=24.5rc0 00:34:09.549 09:03:11 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:34:09.549 00:34:09.549 real 0m0.190s 00:34:09.549 user 0m0.116s 00:34:09.549 sys 0m0.108s 00:34:09.549 ************************************ 00:34:09.549 END TEST version 00:34:09.549 ************************************ 00:34:09.549 09:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:09.549 09:03:11 -- common/autotest_common.sh@10 -- # set +x 00:34:09.549 09:03:11 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:34:09.549 09:03:11 -- spdk/autotest.sh@194 -- # uname -s 00:34:09.549 09:03:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:34:09.549 09:03:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:09.549 09:03:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:09.549 09:03:11 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:34:09.549 09:03:11 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:09.549 09:03:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:09.549 09:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:09.549 09:03:11 -- common/autotest_common.sh@10 -- # set +x 00:34:09.549 ************************************ 00:34:09.549 START TEST blockdev_nvme 
00:34:09.549 ************************************ 00:34:09.549 09:03:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:09.549 * Looking for test storage... 00:34:09.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:09.549 09:03:11 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:09.549 09:03:11 -- bdev/nbd_common.sh@6 -- # set -e 00:34:09.549 09:03:11 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:09.549 09:03:11 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:09.549 09:03:11 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:09.549 09:03:11 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:09.549 09:03:11 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:09.549 09:03:11 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:09.549 09:03:11 -- bdev/blockdev.sh@20 -- # : 00:34:09.549 09:03:11 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:34:09.549 09:03:11 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:34:09.549 09:03:11 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:34:09.549 09:03:11 -- bdev/blockdev.sh@674 -- # uname -s 00:34:09.549 09:03:11 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:34:09.549 09:03:11 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:34:09.549 09:03:11 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:34:09.549 09:03:11 -- bdev/blockdev.sh@683 -- # crypto_device= 00:34:09.549 09:03:11 -- bdev/blockdev.sh@684 -- # dek= 00:34:09.549 09:03:11 -- bdev/blockdev.sh@685 -- # env_ctx= 00:34:09.549 09:03:11 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:34:09.549 09:03:11 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:34:09.549 09:03:11 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:34:09.549 09:03:11 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:34:09.549 09:03:11 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:34:09.549 09:03:11 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66474 00:34:09.549 09:03:11 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:09.549 09:03:11 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:09.549 09:03:11 -- bdev/blockdev.sh@49 -- # waitforlisten 66474 00:34:09.549 09:03:11 -- common/autotest_common.sh@817 -- # '[' -z 66474 ']' 00:34:09.549 09:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.549 09:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:09.549 09:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.549 09:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:09.549 09:03:11 -- common/autotest_common.sh@10 -- # set +x 00:34:09.549 [2024-04-18 09:03:11.631419] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:34:09.549 [2024-04-18 09:03:11.631764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66474 ] 00:34:09.807 [2024-04-18 09:03:11.808025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.066 [2024-04-18 09:03:12.157558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.441 09:03:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:11.441 09:03:13 -- common/autotest_common.sh@850 -- # return 0 00:34:11.441 09:03:13 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:34:11.441 09:03:13 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:34:11.441 09:03:13 -- bdev/blockdev.sh@81 -- # local json 00:34:11.441 09:03:13 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:34:11.441 09:03:13 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:11.441 09:03:13 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:34:11.441 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.441 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:34:11.700 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.700 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- bdev/blockdev.sh@740 -- # cat 00:34:11.700 09:03:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:34:11.700 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.700 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:34:11.700 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.700 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:11.700 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.700 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:34:11.700 09:03:13 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:34:11.700 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:11.700 09:03:13 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:34:11.700 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:11.700 09:03:13 -- 
bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:34:11.700 09:03:13 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0715c44b-a6df-4435-92f6-bee109484dee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0715c44b-a6df-4435-92f6-bee109484dee",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "69461da4-5f83-4e17-9f2c-cab19de29fb3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "69461da4-5f83-4e17-9f2c-cab19de29fb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b77fd751-5e2b-42e5-be39-ce5f00940e83"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b77fd751-5e2b-42e5-be39-ce5f00940e83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": 
"0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c73c8511-2973-48d8-b34f-b50628ce6aa5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c73c8511-2973-48d8-b34f-b50628ce6aa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "71937f89-d437-4ad4-a311-25115c8f05f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71937f89-d437-4ad4-a311-25115c8f05f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d8555016-abc9-4763-beac-9935d2450d5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8555016-abc9-4763-beac-9935d2450d5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:34:11.700 09:03:13 -- bdev/blockdev.sh@749 -- # jq -r .name 00:34:11.700 09:03:13 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:34:11.700 09:03:13 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:34:11.700 09:03:13 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:34:11.700 09:03:13 -- bdev/blockdev.sh@754 -- # killprocess 66474 00:34:11.700 09:03:13 -- common/autotest_common.sh@936 -- # '[' -z 66474 ']' 00:34:11.700 09:03:13 -- common/autotest_common.sh@940 -- # kill -0 66474 00:34:11.700 09:03:13 -- common/autotest_common.sh@941 -- # uname 00:34:11.700 09:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:11.700 09:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66474 00:34:11.960 09:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:11.960 09:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:11.960 killing process with pid 66474 00:34:11.960 09:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66474' 00:34:11.960 09:03:13 -- common/autotest_common.sh@955 -- # kill 66474 00:34:11.960 09:03:13 -- common/autotest_common.sh@960 -- # wait 66474 00:34:14.492 09:03:16 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:14.492 09:03:16 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:34:14.492 09:03:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:34:14.492 09:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:14.492 09:03:16 -- common/autotest_common.sh@10 -- # set +x 00:34:14.749 ************************************ 00:34:14.749 START TEST bdev_hello_world 00:34:14.749 ************************************ 00:34:14.749 09:03:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:34:14.749 [2024-04-18 09:03:16.733431] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:34:14.749 [2024-04-18 09:03:16.733557] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66584 ] 00:34:15.007 [2024-04-18 09:03:16.901693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.266 [2024-04-18 09:03:17.171205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.197 [2024-04-18 09:03:17.932603] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:16.197 [2024-04-18 09:03:17.932662] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:34:16.197 [2024-04-18 09:03:17.932697] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:16.197 [2024-04-18 09:03:17.936167] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:16.197 [2024-04-18 09:03:17.936917] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:16.197 [2024-04-18 09:03:17.936953] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:16.197 [2024-04-18 09:03:17.937103] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:34:16.197 00:34:16.197 [2024-04-18 09:03:17.937131] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:17.571 00:34:17.571 real 0m2.766s 00:34:17.571 user 0m2.379s 00:34:17.571 sys 0m0.271s 00:34:17.571 ************************************ 00:34:17.571 END TEST bdev_hello_world 00:34:17.571 ************************************ 00:34:17.571 09:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:17.571 09:03:19 -- common/autotest_common.sh@10 -- # set +x 00:34:17.571 09:03:19 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:34:17.571 09:03:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:17.571 09:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:17.571 09:03:19 -- common/autotest_common.sh@10 -- # set +x 00:34:17.571 ************************************ 00:34:17.571 START TEST bdev_bounds 00:34:17.571 ************************************ 00:34:17.571 09:03:19 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:34:17.571 09:03:19 -- bdev/blockdev.sh@290 -- # bdevio_pid=66637 00:34:17.571 09:03:19 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:17.572 09:03:19 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:17.572 09:03:19 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66637' 00:34:17.572 Process bdevio pid: 66637 00:34:17.572 09:03:19 -- bdev/blockdev.sh@293 -- # waitforlisten 66637 00:34:17.572 09:03:19 -- common/autotest_common.sh@817 -- # '[' -z 66637 ']' 00:34:17.572 09:03:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.572 09:03:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:17.572 09:03:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
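Two of the helpers exercised above are easy to rerun standalone. hello_bdev needs a JSON config that attaches the controllers plus the name of one bdev to write to and read back; the --json-with-subsystems flag is an assumption to verify against scripts/gen_nvme.sh at this revision:

    # emit a bdev_nvme_attach_controller config like the one loaded earlier (flag name assumed)
    scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
    build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

bdevio is started in wait mode (-w) and then driven over the default RPC socket; a minimal sketch of the two steps run_test wires together, with paths assumed relative to the spdk repository root:

    # start the I/O tester and leave it waiting for RPC commands
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # run the full test matrix against every registered bdev
    test/bdev/bdevio/tests.py perform_tests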
00:34:17.572 09:03:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:17.572 09:03:19 -- common/autotest_common.sh@10 -- # set +x 00:34:17.572 [2024-04-18 09:03:19.655072] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:34:17.572 [2024-04-18 09:03:19.655588] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66637 ] 00:34:17.830 [2024-04-18 09:03:19.833353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:18.088 [2024-04-18 09:03:20.184576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.088 [2024-04-18 09:03:20.184637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.088 [2024-04-18 09:03:20.184642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.024 09:03:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:19.024 09:03:21 -- common/autotest_common.sh@850 -- # return 0 00:34:19.024 09:03:21 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:19.283 I/O targets: 00:34:19.283 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:34:19.283 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:34:19.283 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:19.283 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:19.283 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:19.283 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:34:19.283 00:34:19.283 00:34:19.283 CUnit - A unit testing framework for C - Version 2.1-3 00:34:19.283 http://cunit.sourceforge.net/ 00:34:19.283 00:34:19.283 00:34:19.283 Suite: bdevio tests on: Nvme3n1 00:34:19.283 Test: blockdev write read block ...passed 00:34:19.283 Test: blockdev write zeroes read block ...passed 00:34:19.283 Test: blockdev write zeroes read no split ...passed 00:34:19.283 Test: blockdev write zeroes read split ...passed 00:34:19.283 Test: blockdev write zeroes read split partial ...passed 00:34:19.283 Test: blockdev reset ...[2024-04-18 09:03:21.241490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:34:19.283 [2024-04-18 09:03:21.245640] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:19.283 passed 00:34:19.283 Test: blockdev write read 8 blocks ...passed 00:34:19.283 Test: blockdev write read size > 128k ...passed 00:34:19.283 Test: blockdev write read invalid size ...passed 00:34:19.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.283 Test: blockdev write read max offset ...passed 00:34:19.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.283 Test: blockdev writev readv 8 blocks ...passed 00:34:19.283 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.283 Test: blockdev writev readv block ...passed 00:34:19.283 Test: blockdev writev readv size > 128k ...passed 00:34:19.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.283 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.256088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1ff60e000 len:0x1000 00:34:19.283 [2024-04-18 09:03:21.256316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:19.283 passed 00:34:19.283 Test: blockdev nvme passthru rw ...passed 00:34:19.283 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.257603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:19.283 [2024-04-18 09:03:21.257759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:19.283 passed 00:34:19.283 Test: blockdev nvme admin passthru ...passed 00:34:19.283 Test: blockdev copy ...passed 00:34:19.283 Suite: bdevio tests on: Nvme2n3 00:34:19.283 Test: blockdev write read block ...passed 00:34:19.283 Test: blockdev write zeroes read block ...passed 00:34:19.283 Test: blockdev write zeroes read no split ...passed 00:34:19.283 Test: blockdev write zeroes read split ...passed 00:34:19.283 Test: blockdev write zeroes read split partial ...passed 00:34:19.283 Test: blockdev reset ...[2024-04-18 09:03:21.350249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:34:19.283 [2024-04-18 09:03:21.354504] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:19.283 passed 00:34:19.283 Test: blockdev write read 8 blocks ...passed 00:34:19.283 Test: blockdev write read size > 128k ...passed 00:34:19.283 Test: blockdev write read invalid size ...passed 00:34:19.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.283 Test: blockdev write read max offset ...passed 00:34:19.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.283 Test: blockdev writev readv 8 blocks ...passed 00:34:19.283 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.283 Test: blockdev writev readv block ...passed 00:34:19.283 Test: blockdev writev readv size > 128k ...passed 00:34:19.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.283 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.364675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1ff60a000 len:0x1000 00:34:19.283 [2024-04-18 09:03:21.364897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:19.283 passed 00:34:19.283 Test: blockdev nvme passthru rw ...passed 00:34:19.283 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.366017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:19.283 [2024-04-18 09:03:21.366160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:19.283 passed 00:34:19.284 Test: blockdev nvme admin passthru ...passed 00:34:19.284 Test: blockdev copy ...passed 00:34:19.284 Suite: bdevio tests on: Nvme2n2 00:34:19.284 Test: blockdev write read block ...passed 00:34:19.284 Test: blockdev write zeroes read block ...passed 00:34:19.284 Test: blockdev write zeroes read no split ...passed 00:34:19.542 Test: blockdev write zeroes read split ...passed 00:34:19.542 Test: blockdev write zeroes read split partial ...passed 00:34:19.542 Test: blockdev reset ...[2024-04-18 09:03:21.456378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:34:19.542 [2024-04-18 09:03:21.460462] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:19.542 passed 00:34:19.542 Test: blockdev write read 8 blocks ...passed 00:34:19.542 Test: blockdev write read size > 128k ...passed 00:34:19.542 Test: blockdev write read invalid size ...passed 00:34:19.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.542 Test: blockdev write read max offset ...passed 00:34:19.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.542 Test: blockdev writev readv 8 blocks ...passed 00:34:19.542 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.542 Test: blockdev writev readv block ...passed 00:34:19.542 Test: blockdev writev readv size > 128k ...passed 00:34:19.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.542 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.469669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1df206000 len:0x1000 00:34:19.542 [2024-04-18 09:03:21.469886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:19.542 passed 00:34:19.542 Test: blockdev nvme passthru rw ...passed 00:34:19.542 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.470937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:19.542 [2024-04-18 09:03:21.471119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:19.542 passed 00:34:19.542 Test: blockdev nvme admin passthru ...passed 00:34:19.542 Test: blockdev copy ...passed 00:34:19.542 Suite: bdevio tests on: Nvme2n1 00:34:19.542 Test: blockdev write read block ...passed 00:34:19.542 Test: blockdev write zeroes read block ...passed 00:34:19.542 Test: blockdev write zeroes read no split ...passed 00:34:19.542 Test: blockdev write zeroes read split ...passed 00:34:19.542 Test: blockdev write zeroes read split partial ...passed 00:34:19.542 Test: blockdev reset ...[2024-04-18 09:03:21.571407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:34:19.542 [2024-04-18 09:03:21.576016] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:19.542 passed 00:34:19.542 Test: blockdev write read 8 blocks ...passed 00:34:19.542 Test: blockdev write read size > 128k ...passed 00:34:19.542 Test: blockdev write read invalid size ...passed 00:34:19.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.542 Test: blockdev write read max offset ...passed 00:34:19.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.542 Test: blockdev writev readv 8 blocks ...passed 00:34:19.542 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.542 Test: blockdev writev readv block ...passed 00:34:19.542 Test: blockdev writev readv size > 128k ...passed 00:34:19.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.542 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.585351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1df201000 len:0x1000 00:34:19.543 [2024-04-18 09:03:21.585592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:19.543 passed 00:34:19.543 Test: blockdev nvme passthru rw ...passed 00:34:19.543 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.586662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:19.543 [2024-04-18 09:03:21.586820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:34:19.543 00:34:19.543 Test: blockdev nvme admin passthru ...passed 00:34:19.543 Test: blockdev copy ...passed 00:34:19.543 Suite: bdevio tests on: Nvme1n1 00:34:19.543 Test: blockdev write read block ...passed 00:34:19.543 Test: blockdev write zeroes read block ...passed 00:34:19.543 Test: blockdev write zeroes read no split ...passed 00:34:19.543 Test: blockdev write zeroes read split ...passed 00:34:19.802 Test: blockdev write zeroes read split partial ...passed 00:34:19.802 Test: blockdev reset ...[2024-04-18 09:03:21.676629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:34:19.802 [2024-04-18 09:03:21.680900] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:19.802 passed 00:34:19.802 Test: blockdev write read 8 blocks ...passed 00:34:19.802 Test: blockdev write read size > 128k ...passed 00:34:19.802 Test: blockdev write read invalid size ...passed 00:34:19.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.802 Test: blockdev write read max offset ...passed 00:34:19.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.802 Test: blockdev writev readv 8 blocks ...passed 00:34:19.802 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.802 Test: blockdev writev readv block ...passed 00:34:19.802 Test: blockdev writev readv size > 128k ...passed 00:34:19.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.802 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.689548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1ff206000 len:0x1000 00:34:19.802 [2024-04-18 09:03:21.689781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:19.802 passed 00:34:19.802 Test: blockdev nvme passthru rw ...passed 00:34:19.802 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.690745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:19.802 [2024-04-18 09:03:21.690906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:19.802 passed 00:34:19.802 Test: blockdev nvme admin passthru ...passed 00:34:19.802 Test: blockdev copy ...passed 00:34:19.802 Suite: bdevio tests on: Nvme0n1 00:34:19.802 Test: blockdev write read block ...passed 00:34:19.802 Test: blockdev write zeroes read block ...passed 00:34:19.802 Test: blockdev write zeroes read no split ...passed 00:34:19.802 Test: blockdev write zeroes read split ...passed 00:34:19.802 Test: blockdev write zeroes read split partial ...passed 00:34:19.802 Test: blockdev reset ...[2024-04-18 09:03:21.782592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:19.802 [2024-04-18 09:03:21.786962] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:19.802 passed 00:34:19.802 Test: blockdev write read 8 blocks ...passed 00:34:19.802 Test: blockdev write read size > 128k ...passed 00:34:19.802 Test: blockdev write read invalid size ...passed 00:34:19.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.802 Test: blockdev write read max offset ...passed 00:34:19.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.802 Test: blockdev writev readv 8 blocks ...passed 00:34:19.802 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.802 Test: blockdev writev readv block ...passed 00:34:19.802 Test: blockdev writev readv size > 128k ...passed 00:34:19.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.802 Test: blockdev comparev and writev ...[2024-04-18 09:03:21.794832] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:34:19.802 separate metadata which is not supported yet. 
00:34:19.802 passed 00:34:19.802 Test: blockdev nvme passthru rw ...passed 00:34:19.802 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:03:21.795721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:34:19.802 [2024-04-18 09:03:21.795906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:34:19.802 passed 00:34:19.802 Test: blockdev nvme admin passthru ...passed 00:34:19.802 Test: blockdev copy ...passed 00:34:19.802 00:34:19.802 Run Summary: Type Total Ran Passed Failed Inactive 00:34:19.802 suites 6 6 n/a 0 0 00:34:19.802 tests 138 138 138 0 0 00:34:19.802 asserts 893 893 893 0 n/a 00:34:19.802 00:34:19.802 Elapsed time = 1.776 seconds 00:34:19.802 0 00:34:19.802 09:03:21 -- bdev/blockdev.sh@295 -- # killprocess 66637 00:34:19.802 09:03:21 -- common/autotest_common.sh@936 -- # '[' -z 66637 ']' 00:34:19.802 09:03:21 -- common/autotest_common.sh@940 -- # kill -0 66637 00:34:19.802 09:03:21 -- common/autotest_common.sh@941 -- # uname 00:34:19.802 09:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:19.802 09:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66637 00:34:19.802 09:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:19.802 killing process with pid 66637 00:34:19.802 09:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:19.802 09:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66637' 00:34:19.802 09:03:21 -- common/autotest_common.sh@955 -- # kill 66637 00:34:19.802 09:03:21 -- common/autotest_common.sh@960 -- # wait 66637 00:34:21.179 ************************************ 00:34:21.179 END TEST bdev_bounds 00:34:21.179 ************************************ 00:34:21.179 09:03:23 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:34:21.179 00:34:21.179 real 0m3.559s 00:34:21.179 user 0m8.571s 00:34:21.179 sys 0m0.461s 00:34:21.179 09:03:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:21.179 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:34:21.179 09:03:23 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:34:21.179 09:03:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:21.179 09:03:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:21.179 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:34:21.179 ************************************ 00:34:21.179 START TEST bdev_nbd 00:34:21.179 ************************************ 00:34:21.179 09:03:23 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:34:21.179 09:03:23 -- bdev/blockdev.sh@300 -- # uname -s 00:34:21.179 09:03:23 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:34:21.179 09:03:23 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:21.179 09:03:23 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:21.179 09:03:23 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:21.179 09:03:23 -- bdev/blockdev.sh@304 -- # local bdev_all 00:34:21.179 09:03:23 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:34:21.179 09:03:23 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd 
]] 00:34:21.179 09:03:23 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:21.179 09:03:23 -- bdev/blockdev.sh@311 -- # local nbd_all 00:34:21.179 09:03:23 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:34:21.179 09:03:23 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:21.179 09:03:23 -- bdev/blockdev.sh@314 -- # local nbd_list 00:34:21.179 09:03:23 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:21.179 09:03:23 -- bdev/blockdev.sh@315 -- # local bdev_list 00:34:21.179 09:03:23 -- bdev/blockdev.sh@318 -- # nbd_pid=66717 00:34:21.179 09:03:23 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:21.179 09:03:23 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:21.179 09:03:23 -- bdev/blockdev.sh@320 -- # waitforlisten 66717 /var/tmp/spdk-nbd.sock 00:34:21.179 09:03:23 -- common/autotest_common.sh@817 -- # '[' -z 66717 ']' 00:34:21.179 09:03:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:21.179 09:03:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:21.179 09:03:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:21.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:21.179 09:03:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:21.179 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:34:21.437 [2024-04-18 09:03:23.325752] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
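With bdev_bounds reaped, the nbd test brings up a bare bdev_svc application against the same bdev.json and parks its JSON-RPC server on a private Unix socket; every nbd operation that follows is an rpc.py call against that socket. A minimal sketch of the round-trip, using only RPC methods that appear verbatim in this trace (paths as the harness uses them):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # export a bdev as a kernel block device, list active exports, tear one down
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    "$rpc" -s "$sock" nbd_get_disks
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

The -s flag is what keeps this run isolated: SPDK's default /var/tmp/spdk.sock is left untouched, so another SPDK app on the same host would not collide with the test.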
00:34:21.437 [2024-04-18 09:03:23.326096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.437 [2024-04-18 09:03:23.500698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.004 [2024-04-18 09:03:23.859374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.006 09:03:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:23.006 09:03:24 -- common/autotest_common.sh@850 -- # return 0 00:34:23.006 09:03:24 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@24 -- # local i 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:23.006 09:03:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:34:23.006 09:03:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:23.006 09:03:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:23.006 09:03:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:23.006 09:03:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:34:23.006 09:03:25 -- common/autotest_common.sh@855 -- # local i 00:34:23.006 09:03:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:23.006 09:03:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:23.006 09:03:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:34:23.006 09:03:25 -- common/autotest_common.sh@859 -- # break 00:34:23.006 09:03:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:23.006 09:03:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:23.006 09:03:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.006 1+0 records in 00:34:23.006 1+0 records out 00:34:23.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782324 s, 5.2 MB/s 00:34:23.006 09:03:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.006 09:03:25 -- common/autotest_common.sh@872 -- # size=4096 00:34:23.006 09:03:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.006 09:03:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:23.006 09:03:25 -- common/autotest_common.sh@875 -- # return 0 00:34:23.006 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:23.006 09:03:25 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:23.006 09:03:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:23.264 09:03:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:34:23.264 09:03:25 -- common/autotest_common.sh@855 -- # local i 00:34:23.264 09:03:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:23.264 09:03:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:23.264 09:03:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:34:23.264 09:03:25 -- common/autotest_common.sh@859 -- # break 00:34:23.264 09:03:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:23.264 09:03:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:23.264 09:03:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.264 1+0 records in 00:34:23.264 1+0 records out 00:34:23.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704834 s, 5.8 MB/s 00:34:23.264 09:03:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.264 09:03:25 -- common/autotest_common.sh@872 -- # size=4096 00:34:23.264 09:03:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.264 09:03:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:23.264 09:03:25 -- common/autotest_common.sh@875 -- # return 0 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:23.264 09:03:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:34:23.522 09:03:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:34:23.522 09:03:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:34:23.522 09:03:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:34:23.522 09:03:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:34:23.522 09:03:25 -- common/autotest_common.sh@855 -- # local i 00:34:23.522 09:03:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:23.522 09:03:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:23.522 09:03:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:34:23.522 09:03:25 -- common/autotest_common.sh@859 -- # break 00:34:23.522 09:03:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:23.522 09:03:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:23.522 09:03:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.522 1+0 records in 00:34:23.522 1+0 records out 00:34:23.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857858 s, 4.8 MB/s 00:34:23.522 09:03:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.522 09:03:25 -- common/autotest_common.sh@872 -- # size=4096 00:34:23.522 09:03:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.522 09:03:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:23.522 09:03:25 -- common/autotest_common.sh@875 -- # return 0 
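The block that just completed (and repeats once per export) is the harness's readiness gate, waitfornbd: poll /proc/partitions until the new node appears, then prove it is actually readable with a single 4 KiB O_DIRECT read. A condensed reconstruction from the xtrace — the sleep back-off is an assumption, since grep hits on the first pass in this run and that branch leaves no trace:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; this branch never fires in this run
        done
        # a present-but-unreadable node must still fail the test
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }

The 2.6–5.8 MB/s figures dd prints in these blocks are per-I/O latency in disguise (one 4 KiB read over the nbd kernel/user round-trip, roughly a millisecond each), not a throughput measurement.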
00:34:23.522 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:23.522 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:23.522 09:03:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:34:24.089 09:03:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:34:24.089 09:03:25 -- common/autotest_common.sh@855 -- # local i 00:34:24.089 09:03:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:24.089 09:03:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:24.089 09:03:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:34:24.089 09:03:25 -- common/autotest_common.sh@859 -- # break 00:34:24.089 09:03:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:24.089 09:03:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:24.089 09:03:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:24.089 1+0 records in 00:34:24.089 1+0 records out 00:34:24.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106439 s, 3.8 MB/s 00:34:24.089 09:03:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.089 09:03:25 -- common/autotest_common.sh@872 -- # size=4096 00:34:24.089 09:03:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.089 09:03:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:24.089 09:03:25 -- common/autotest_common.sh@875 -- # return 0 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:24.089 09:03:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:34:24.347 09:03:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:34:24.347 09:03:26 -- common/autotest_common.sh@855 -- # local i 00:34:24.347 09:03:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:24.347 09:03:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:24.347 09:03:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:34:24.347 09:03:26 -- common/autotest_common.sh@859 -- # break 00:34:24.347 09:03:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:24.347 09:03:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:24.347 09:03:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:24.347 1+0 records in 00:34:24.347 1+0 records out 00:34:24.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824653 s, 5.0 MB/s 00:34:24.347 09:03:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.347 09:03:26 -- common/autotest_common.sh@872 -- # size=4096 00:34:24.347 09:03:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.347 09:03:26 -- common/autotest_common.sh@874 -- # '[' 4096 
'!=' 0 ']' 00:34:24.347 09:03:26 -- common/autotest_common.sh@875 -- # return 0 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:24.347 09:03:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:34:24.606 09:03:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:34:24.606 09:03:26 -- common/autotest_common.sh@855 -- # local i 00:34:24.606 09:03:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:24.606 09:03:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:24.606 09:03:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:34:24.606 09:03:26 -- common/autotest_common.sh@859 -- # break 00:34:24.606 09:03:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:24.606 09:03:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:24.606 09:03:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:24.606 1+0 records in 00:34:24.606 1+0 records out 00:34:24.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118107 s, 3.5 MB/s 00:34:24.606 09:03:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.606 09:03:26 -- common/autotest_common.sh@872 -- # size=4096 00:34:24.606 09:03:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:24.606 09:03:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:24.606 09:03:26 -- common/autotest_common.sh@875 -- # return 0 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:24.606 09:03:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:24.904 09:03:26 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd0", 00:34:24.905 "bdev_name": "Nvme0n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd1", 00:34:24.905 "bdev_name": "Nvme1n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd2", 00:34:24.905 "bdev_name": "Nvme2n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd3", 00:34:24.905 "bdev_name": "Nvme2n2" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd4", 00:34:24.905 "bdev_name": "Nvme2n3" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd5", 00:34:24.905 "bdev_name": "Nvme3n1" 00:34:24.905 } 00:34:24.905 ]' 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd0", 00:34:24.905 "bdev_name": "Nvme0n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd1", 00:34:24.905 "bdev_name": "Nvme1n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd2", 00:34:24.905 "bdev_name": "Nvme2n1" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd3", 00:34:24.905 "bdev_name": "Nvme2n2" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": 
"/dev/nbd4", 00:34:24.905 "bdev_name": "Nvme2n3" 00:34:24.905 }, 00:34:24.905 { 00:34:24.905 "nbd_device": "/dev/nbd5", 00:34:24.905 "bdev_name": "Nvme3n1" 00:34:24.905 } 00:34:24.905 ]' 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@51 -- # local i 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:24.905 09:03:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@41 -- # break 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:24.905 09:03:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.163 09:03:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@41 -- # break 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.421 09:03:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@41 -- # break 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.680 09:03:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:34:25.938 
09:03:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@41 -- # break 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.938 09:03:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@41 -- # break 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@45 -- # return 0 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:26.195 09:03:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@41 -- # break 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@45 -- # return 0 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:26.454 09:03:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@65 -- # true 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@65 -- # count=0 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@122 -- # count=0 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@127 -- # return 0 00:34:26.712 09:03:28 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@12 -- # local i 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:26.712 09:03:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:34:26.971 /dev/nbd0 00:34:26.971 09:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:26.971 09:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:26.971 09:03:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:34:26.971 09:03:29 -- common/autotest_common.sh@855 -- # local i 00:34:26.971 09:03:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:26.971 09:03:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:26.971 09:03:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:34:27.229 09:03:29 -- common/autotest_common.sh@859 -- # break 00:34:27.229 09:03:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:27.229 09:03:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:27.229 09:03:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:27.229 1+0 records in 00:34:27.229 1+0 records out 00:34:27.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810872 s, 5.1 MB/s 00:34:27.229 09:03:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.229 09:03:29 -- common/autotest_common.sh@872 -- # size=4096 00:34:27.229 09:03:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.229 09:03:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:27.229 09:03:29 -- common/autotest_common.sh@875 -- # return 0 00:34:27.229 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:27.229 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:27.229 09:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:34:27.548 /dev/nbd1 00:34:27.548 09:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:27.548 09:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:27.548 09:03:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:34:27.548 09:03:29 -- common/autotest_common.sh@855 -- # local i 00:34:27.548 09:03:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:27.548 09:03:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:27.548 09:03:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:34:27.548 09:03:29 -- common/autotest_common.sh@859 -- # break 
00:34:27.548 09:03:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:27.548 09:03:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:27.548 09:03:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:27.548 1+0 records in 00:34:27.548 1+0 records out 00:34:27.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969764 s, 4.2 MB/s 00:34:27.548 09:03:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.548 09:03:29 -- common/autotest_common.sh@872 -- # size=4096 00:34:27.548 09:03:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.548 09:03:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:27.548 09:03:29 -- common/autotest_common.sh@875 -- # return 0 00:34:27.548 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:27.548 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:27.548 09:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:34:27.822 /dev/nbd10 00:34:27.822 09:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:34:27.822 09:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:34:27.822 09:03:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:34:27.822 09:03:29 -- common/autotest_common.sh@855 -- # local i 00:34:27.822 09:03:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:27.822 09:03:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:27.822 09:03:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:34:27.822 09:03:29 -- common/autotest_common.sh@859 -- # break 00:34:27.822 09:03:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:27.822 09:03:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:27.822 09:03:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:27.822 1+0 records in 00:34:27.822 1+0 records out 00:34:27.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700344 s, 5.8 MB/s 00:34:27.822 09:03:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.822 09:03:29 -- common/autotest_common.sh@872 -- # size=4096 00:34:27.822 09:03:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:27.822 09:03:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:27.822 09:03:29 -- common/autotest_common.sh@875 -- # return 0 00:34:27.822 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:27.822 09:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:27.822 09:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:34:28.080 /dev/nbd11 00:34:28.080 09:03:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:34:28.080 09:03:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:34:28.080 09:03:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:34:28.080 09:03:30 -- common/autotest_common.sh@855 -- # local i 00:34:28.080 09:03:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:28.080 09:03:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:28.080 09:03:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:34:28.080 09:03:30 -- 
common/autotest_common.sh@859 -- # break 00:34:28.080 09:03:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:28.080 09:03:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:28.080 09:03:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:28.080 1+0 records in 00:34:28.080 1+0 records out 00:34:28.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000942521 s, 4.3 MB/s 00:34:28.080 09:03:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.080 09:03:30 -- common/autotest_common.sh@872 -- # size=4096 00:34:28.080 09:03:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.080 09:03:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:28.080 09:03:30 -- common/autotest_common.sh@875 -- # return 0 00:34:28.080 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:28.080 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:28.080 09:03:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:34:28.339 /dev/nbd12 00:34:28.339 09:03:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:34:28.339 09:03:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:34:28.339 09:03:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:34:28.339 09:03:30 -- common/autotest_common.sh@855 -- # local i 00:34:28.339 09:03:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:28.339 09:03:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:28.339 09:03:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:34:28.339 09:03:30 -- common/autotest_common.sh@859 -- # break 00:34:28.339 09:03:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:28.339 09:03:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:28.339 09:03:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:28.339 1+0 records in 00:34:28.339 1+0 records out 00:34:28.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870299 s, 4.7 MB/s 00:34:28.339 09:03:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.339 09:03:30 -- common/autotest_common.sh@872 -- # size=4096 00:34:28.339 09:03:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.339 09:03:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:28.339 09:03:30 -- common/autotest_common.sh@875 -- # return 0 00:34:28.339 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:28.339 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:28.339 09:03:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:34:28.598 /dev/nbd13 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:34:28.598 09:03:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:34:28.598 09:03:30 -- common/autotest_common.sh@855 -- # local i 00:34:28.598 09:03:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:28.598 09:03:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:28.598 09:03:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 
00:34:28.598 09:03:30 -- common/autotest_common.sh@859 -- # break 00:34:28.598 09:03:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:28.598 09:03:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:28.598 09:03:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:28.598 1+0 records in 00:34:28.598 1+0 records out 00:34:28.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159588 s, 2.6 MB/s 00:34:28.598 09:03:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.598 09:03:30 -- common/autotest_common.sh@872 -- # size=4096 00:34:28.598 09:03:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.598 09:03:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:28.598 09:03:30 -- common/autotest_common.sh@875 -- # return 0 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:28.598 09:03:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:29.167 09:03:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd0", 00:34:29.167 "bdev_name": "Nvme0n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd1", 00:34:29.167 "bdev_name": "Nvme1n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd10", 00:34:29.167 "bdev_name": "Nvme2n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd11", 00:34:29.167 "bdev_name": "Nvme2n2" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd12", 00:34:29.167 "bdev_name": "Nvme2n3" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd13", 00:34:29.167 "bdev_name": "Nvme3n1" 00:34:29.167 } 00:34:29.167 ]' 00:34:29.167 09:03:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:29.167 09:03:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd0", 00:34:29.167 "bdev_name": "Nvme0n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd1", 00:34:29.167 "bdev_name": "Nvme1n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd10", 00:34:29.167 "bdev_name": "Nvme2n1" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd11", 00:34:29.167 "bdev_name": "Nvme2n2" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd12", 00:34:29.167 "bdev_name": "Nvme2n3" 00:34:29.167 }, 00:34:29.167 { 00:34:29.167 "nbd_device": "/dev/nbd13", 00:34:29.167 "bdev_name": "Nvme3n1" 00:34:29.167 } 00:34:29.167 ]' 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:29.167 /dev/nbd1 00:34:29.167 /dev/nbd10 00:34:29.167 /dev/nbd11 00:34:29.167 /dev/nbd12 00:34:29.167 /dev/nbd13' 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:29.167 /dev/nbd1 00:34:29.167 /dev/nbd10 00:34:29.167 /dev/nbd11 00:34:29.167 /dev/nbd12 00:34:29.167 /dev/nbd13' 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@65 -- # count=6 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@66 -- # echo 6 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@95 -- # count=6 
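Six exports up, and the JSON above is how the harness knows it: the device count is derived from nbd_get_disks output, and only once the count check just below passes does the data pass run — 1 MiB of /dev/urandom written through every export with O_DIRECT, read back, and compared byte-for-byte. A condensed sketch of that pipeline, assuming the helper shapes visible in the xtrace (the rpc/sock variables and the shortened temp-file name are this sketch's, not the harness's):

    # gate: exactly six /dev/nbd* exports must be live
    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne 6 ] && exit 1

    # write pass, then verify pass, over all six exports
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M nbdrandtest "$dev"   # any mismatch aborts the test
    done
    rm nbdrandtest

Worth noticing in the timings that follow: the same megabyte that /dev/urandom produced at ~111 MB/s lands on the nbd devices at 7.5–8.3 MB/s — presumably the cost of 256 synchronous 4 KiB direct writes per device, not of the bdevs themselves.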
00:34:29.167 09:03:31 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:29.167 256+0 records in 00:34:29.167 256+0 records out 00:34:29.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00941975 s, 111 MB/s 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:29.167 256+0 records in 00:34:29.167 256+0 records out 00:34:29.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126041 s, 8.3 MB/s 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.167 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:29.427 256+0 records in 00:34:29.427 256+0 records out 00:34:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134249 s, 7.8 MB/s 00:34:29.427 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.427 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:34:29.427 256+0 records in 00:34:29.427 256+0 records out 00:34:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13499 s, 7.8 MB/s 00:34:29.427 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.427 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:34:29.686 256+0 records in 00:34:29.686 256+0 records out 00:34:29.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140271 s, 7.5 MB/s 00:34:29.686 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.686 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:34:29.686 256+0 records in 00:34:29.686 256+0 records out 00:34:29.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136063 s, 7.7 MB/s 00:34:29.686 09:03:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:29.686 09:03:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:34:29.945 256+0 records in 00:34:29.945 256+0 records out 00:34:29.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139984 s, 7.5 MB/s 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:29.945 09:03:31 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@51 -- # local i 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:29.945 09:03:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@41 -- # break 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@45 -- # return 0 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:30.204 09:03:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:30.462 09:03:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:30.462 09:03:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:30.463 09:03:32 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@41 -- # break 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@45 -- # return 0 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:30.463 09:03:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@41 -- # break 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@45 -- # return 0 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:30.721 09:03:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:34:30.979 09:03:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@41 -- # break 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@45 -- # return 0 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:30.980 09:03:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@41 -- # break 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@45 -- # return 0 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.240 09:03:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@41 -- # break 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@45 -- # return 0 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:31.500 09:03:33 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@65 -- # true 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@65 -- # count=0 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@104 -- # count=0 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@109 -- # return 0 00:34:31.759 09:03:33 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:31.759 09:03:33 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:32.017 malloc_lvol_verify 00:34:32.017 09:03:34 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:32.274 83c87647-e56e-4771-bc24-5c11ffc0ba5e 00:34:32.274 09:03:34 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:32.534 7971285b-bacc-442c-b811-4eccd2550eb1 00:34:32.534 09:03:34 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:32.793 /dev/nbd0 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:32.793 mke2fs 1.46.5 (30-Dec-2021) 00:34:32.793 Discarding device blocks: 0/4096 done 00:34:32.793 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:32.793 00:34:32.793 Allocating group tables: 0/1 done 00:34:32.793 Writing inode tables: 0/1 done 00:34:32.793 Creating journal (1024 blocks): done 00:34:32.793 Writing superblocks and filesystem accounting information: 0/1 done 00:34:32.793 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@51 -- # local i 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:32.793 09:03:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@41 -- # break 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@45 -- # return 0 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:33.052 09:03:35 -- bdev/nbd_common.sh@147 -- # return 0 00:34:33.052 09:03:35 -- bdev/blockdev.sh@326 -- # killprocess 66717 00:34:33.052 09:03:35 -- common/autotest_common.sh@936 -- # '[' -z 66717 ']' 00:34:33.052 09:03:35 -- common/autotest_common.sh@940 -- # kill -0 66717 00:34:33.052 09:03:35 -- common/autotest_common.sh@941 -- # uname 00:34:33.052 09:03:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:33.052 09:03:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66717 00:34:33.052 09:03:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:33.052 09:03:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:33.052 09:03:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66717' 00:34:33.052 killing process with pid 66717 00:34:33.052 09:03:35 -- common/autotest_common.sh@955 -- # kill 66717 00:34:33.052 09:03:35 -- common/autotest_common.sh@960 -- # wait 66717 00:34:34.979 ************************************ 00:34:34.979 END TEST bdev_nbd 00:34:34.979 ************************************ 00:34:34.979 09:03:36 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:34:34.979 00:34:34.979 real 0m13.451s 00:34:34.979 user 0m17.688s 00:34:34.979 sys 0m5.022s 00:34:34.979 09:03:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:34.979 09:03:36 -- common/autotest_common.sh@10 -- # set +x 00:34:34.979 09:03:36 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:34:34.979 09:03:36 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:34:34.979 skipping fio tests on NVMe due to multi-ns failures. 00:34:34.979 09:03:36 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:34:34.979 09:03:36 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:34.979 09:03:36 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:34.979 09:03:36 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:34:34.979 09:03:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:34.979 09:03:36 -- common/autotest_common.sh@10 -- # set +x 00:34:34.979 ************************************ 00:34:34.979 START TEST bdev_verify 00:34:34.979 ************************************ 00:34:34.979 09:03:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:34.979 [2024-04-18 09:03:36.919403] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
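That closes bdev_nbd (13.4 s wall, pid 66717 reaped the same way as before), fio is skipped outright on multi-ns NVMe, and the suite moves from shell-level plumbing to bdevperf. The invocation traced above, reflowed — the per-flag notes are annotations based on bdevperf's usual option meanings, not part of the log, and -C is left unannotated deliberately:

    # -q 128     keep 128 I/Os in flight per job
    # -o 4096    4 KiB I/O size
    # -w verify  write a pattern, read it back, compare
    # -t 5       run each job for 5 seconds
    # -m 0x3     core mask: reactors on cores 0 and 1, matching the startup below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

With two reactors, every namespace gets one job per core — which is why the table that follows lists each NvmeXnY twice, once under Core Mask 0x1 and once under 0x2.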
00:34:34.979 [2024-04-18 09:03:36.919578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67124 ] 00:34:35.238 [2024-04-18 09:03:37.110236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.497 [2024-04-18 09:03:37.421931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.497 [2024-04-18 09:03:37.421957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.432 Running I/O for 5 seconds... 00:34:41.700 00:34:41.700 Latency(us) 00:34:41.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0xbd0bd 00:34:41.700 Nvme0n1 : 5.08 1422.33 5.56 0.00 0.00 89608.44 7458.62 93373.20 00:34:41.700 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:34:41.700 Nvme0n1 : 5.05 1443.75 5.64 0.00 0.00 88336.61 16477.62 81888.79 00:34:41.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0xa0000 00:34:41.700 Nvme1n1 : 5.09 1421.76 5.55 0.00 0.00 89438.29 8363.64 90377.26 00:34:41.700 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0xa0000 length 0xa0000 00:34:41.700 Nvme1n1 : 5.05 1443.32 5.64 0.00 0.00 88237.57 17975.59 80390.83 00:34:41.700 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0x80000 00:34:41.700 Nvme2n1 : 5.09 1421.30 5.55 0.00 0.00 89292.39 8675.72 87381.33 00:34:41.700 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x80000 length 0x80000 00:34:41.700 Nvme2n1 : 5.06 1442.92 5.64 0.00 0.00 88012.93 17101.78 78393.54 00:34:41.700 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0x80000 00:34:41.700 Nvme2n2 : 5.10 1431.08 5.59 0.00 0.00 88675.33 7177.75 84884.72 00:34:41.700 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x80000 length 0x80000 00:34:41.700 Nvme2n2 : 5.07 1451.44 5.67 0.00 0.00 87405.22 5398.92 76396.25 00:34:41.700 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0x80000 00:34:41.700 Nvme2n3 : 5.10 1430.64 5.59 0.00 0.00 88522.87 7427.41 87381.33 00:34:41.700 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x80000 length 0x80000 00:34:41.700 Nvme2n3 : 5.07 1450.89 5.67 0.00 0.00 87265.08 5648.58 78892.86 00:34:41.700 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x0 length 0x20000 00:34:41.700 Nvme3n1 : 5.10 1430.21 5.59 0.00 0.00 88369.45 7677.07 91375.91 00:34:41.700 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:41.700 Verification LBA range: start 0x20000 length 0x20000 00:34:41.700 Nvme3n1 : 5.08 1460.31 5.70 0.00 0.00 86628.37 7489.83 81888.79 00:34:41.700 
00:34:43.096 09:03:45 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:43.096 09:03:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:34:43.096 09:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:34:43.096 09:03:45 -- common/autotest_common.sh@10 -- # set +x
00:34:43.353 ************************************
00:34:43.353 START TEST bdev_verify_big_io
00:34:43.353 ************************************
00:34:43.354 09:03:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:43.354 [2024-04-18 09:03:45.370756] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:34:43.354 [2024-04-18 09:03:45.370961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67237 ]
00:34:43.612 [2024-04-18 09:03:45.551361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:43.871 [2024-04-18 09:03:45.896842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:43.871 [2024-04-18 09:03:45.896856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:44.805 Running I/O for 5 seconds...
00:34:51.394
00:34:51.394 Latency(us)
00:34:51.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:51.394 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0xbd0b
00:34:51.394 Nvme0n1 : 5.70 123.41 7.71 0.00 0.00 1002082.77 27837.20 1038589.56
00:34:51.394 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0xbd0b length 0xbd0b
00:34:51.394 Nvme0n1 : 5.94 134.92 8.43 0.00 0.00 809310.26 41194.06 830871.65
00:34:51.394 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0xa000
00:34:51.394 Nvme1n1 : 5.83 128.30 8.02 0.00 0.00 943086.90 77894.22 886795.70
00:34:51.394 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0xa000 length 0xa000
00:34:51.394 Nvme1n1 : 5.96 142.06 8.88 0.00 0.00 753678.49 1927.07 1805548.01
00:34:51.394 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0x8000
00:34:51.394 Nvme2n1 : 5.83 126.89 7.93 0.00 0.00 918997.24 76895.57 902774.00
00:34:51.394 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x8000 length 0x8000
00:34:51.394 Nvme2n1 : 5.85 126.73 7.92 0.00 0.00 976906.46 21221.18 1038589.56
00:34:51.394 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0x8000
00:34:51.394 Nvme2n2 : 5.83 131.63 8.23 0.00 0.00 866218.34 43441.01 922746.88
00:34:51.394 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x8000 length 0x8000
00:34:51.394 Nvme2n2 : 5.85 126.53 7.91 0.00 0.00 947081.06 82388.11 882801.13
00:34:51.394 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0x8000
00:34:51.394 Nvme2n3 : 5.89 135.07 8.44 0.00 0.00 816101.30 44689.31 950708.91
00:34:51.394 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x8000 length 0x8000
00:34:51.394 Nvme2n3 : 5.80 126.93 7.93 0.00 0.00 918611.87 82388.11 758969.30
00:34:51.394 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x0 length 0x2000
00:34:51.394 Nvme3n1 : 5.96 150.33 9.40 0.00 0.00 714227.01 10922.67 978670.93
00:34:51.394 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:51.394 Verification LBA range: start 0x2000 length 0x2000
00:34:51.394 Nvme3n1 : 5.85 131.25 8.20 0.00 0.00 863763.34 49183.21 790925.90
00:34:51.394 ===================================================================================================================
00:34:51.394 Total : 1584.05 99.00 0.00 0.00 872026.57 1927.07 1805548.01
00:34:52.769
00:34:52.769 real	0m9.424s
00:34:52.769 user	0m17.084s
00:34:52.769 sys	0m0.363s
00:34:52.769 09:03:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:52.769 09:03:54 -- common/autotest_common.sh@10 -- # set +x
00:34:52.769 ************************************
00:34:52.769 END TEST bdev_verify_big_io
00:34:52.769 ************************************
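Each of these sections is produced by the harness's run_test wrapper, which emits the banner pairs and the real/user/sys timing seen throughout this log. A rough sketch of its shape (simplified; not the exact autotest_common.sh implementation):

    # Rough shape of run_test (simplified sketch, not the real implementation):
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines recorded above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }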
00:34:52.769 09:03:54 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:52.769 09:03:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:34:52.769 09:03:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:34:52.769 09:03:54 -- common/autotest_common.sh@10 -- # set +x
00:34:52.769 ************************************
00:34:52.769 START TEST bdev_write_zeroes
00:34:52.769 ************************************
00:34:52.769 09:03:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:53.027 [2024-04-18 09:03:54.916205] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:34:53.027 [2024-04-18 09:03:54.916398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67360 ]
00:34:53.027 [2024-04-18 09:03:55.096342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:53.286 [2024-04-18 09:03:55.362860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:54.220 Running I/O for 1 seconds...
00:34:55.153
00:34:55.153 Latency(us)
00:34:55.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:55.153 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme0n1 : 1.02 8313.85 32.48 0.00 0.00 15341.39 11796.48 24217.11
00:34:55.153 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme1n1 : 1.02 8301.05 32.43 0.00 0.00 15341.10 12170.97 24841.26
00:34:55.153 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme2n1 : 1.02 8287.98 32.37 0.00 0.00 15313.65 12046.14 23343.30
00:34:55.153 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme2n2 : 1.02 8325.89 32.52 0.00 0.00 15214.29 9175.04 21970.16
00:34:55.153 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme2n3 : 1.02 8313.30 32.47 0.00 0.00 15206.03 9549.53 22094.99
00:34:55.153 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:55.153 Nvme3n1 : 1.03 8300.70 32.42 0.00 0.00 15192.62 9112.62 22344.66
00:34:55.153 ===================================================================================================================
00:34:55.153 Total : 49842.78 194.70 0.00 0.00 15267.94 9112.62 24841.26
00:34:57.052
00:34:57.052 real	0m3.906s
00:34:57.052 user	0m3.477s
00:34:57.052 sys	0m0.299s
00:34:57.052 09:03:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:57.052 09:03:58 -- common/autotest_common.sh@10 -- # set +x
00:34:57.052 ************************************
00:34:57.052 END TEST bdev_write_zeroes
00:34:57.052 ************************************
00:34:57.052 09:03:58 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:57.052 09:03:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:34:57.052 09:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:34:57.052 09:03:58 -- common/autotest_common.sh@10 -- # set +x
00:34:57.052 ************************************
00:34:57.052 START TEST bdev_json_nonenclosed
00:34:57.052 ************************************
00:34:57.052 09:03:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:57.052 [2024-04-18 09:03:58.952074] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:34:57.052 [2024-04-18 09:03:58.952247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67424 ]
00:34:57.052 [2024-04-18 09:03:59.142150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:57.622 [2024-04-18 09:03:59.452545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:57.622 [2024-04-18 09:03:59.452644] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:34:57.622 [2024-04-18 09:03:59.452667] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:34:57.622 [2024-04-18 09:03:59.452681] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:34:57.882
00:34:57.882 real	0m1.128s
00:34:57.882 user	0m0.834s
00:34:57.882 sys	0m0.184s
00:34:57.882 09:03:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:57.882 09:03:59 -- common/autotest_common.sh@10 -- # set +x
00:34:57.882 ************************************
00:34:57.882 END TEST bdev_json_nonenclosed
00:34:57.882 ************************************
00:34:58.140 09:04:00 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:58.140 09:04:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:34:58.140 09:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:34:58.140 09:04:00 -- common/autotest_common.sh@10 -- # set +x
00:34:58.140 ************************************
00:34:58.140 START TEST bdev_json_nonarray
00:34:58.140 ************************************
00:34:58.140 09:04:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:58.140 [2024-04-18 09:04:00.186627] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:34:58.140 [2024-04-18 09:04:00.186784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67459 ]
00:34:58.398 [2024-04-18 09:04:00.352765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:58.657 [2024-04-18 09:04:00.598362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:58.657 [2024-04-18 09:04:00.598472] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:34:58.657 [2024-04-18 09:04:00.598504] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:34:58.657 [2024-04-18 09:04:00.598518] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:34:59.225
00:34:59.225 real	0m1.003s
00:34:59.225 user	0m0.744s
00:34:59.225 sys	0m0.151s
00:34:59.225 09:04:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:59.225 09:04:01 -- common/autotest_common.sh@10 -- # set +x
00:34:59.225 ************************************
00:34:59.225 END TEST bdev_json_nonarray
00:34:59.225 ************************************
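The two negative tests above pin down the config contract behind bdevperf's --json option: the file must be a top-level object (nonenclosed.json omits the braces) whose "subsystems" key is an array (nonarray.json makes it something else). A minimal valid config has the following shape; the attach_controller entry mirrors the gen_nvme.sh output used later in this run, while the /tmp path is purely illustrative:

    # Minimal valid shape for a bdevperf --json config (illustrative /tmp path):
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF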
00:34:59.225 09:04:01 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:34:59.225 09:04:01 -- bdev/blockdev.sh@811 -- # cleanup
00:34:59.225 09:04:01 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:34:59.225 09:04:01 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:34:59.225 09:04:01 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:34:59.225 09:04:01 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:34:59.225
00:34:59.225 real	0m49.743s
00:34:59.225 user	1m11.090s
00:34:59.225 sys	0m8.407s
00:34:59.225 09:04:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:34:59.225 09:04:01 -- common/autotest_common.sh@10 -- # set +x
00:34:59.225 ************************************
00:34:59.225 END TEST blockdev_nvme
00:34:59.225 ************************************
00:34:59.225 09:04:01 -- spdk/autotest.sh@209 -- # uname -s
00:34:59.225 09:04:01 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:34:59.225 09:04:01 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:34:59.225 09:04:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:34:59.225 09:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:34:59.225 09:04:01 -- common/autotest_common.sh@10 -- # set +x
00:34:59.225 ************************************
00:34:59.225 START TEST blockdev_nvme_gpt
00:34:59.225 ************************************
00:34:59.225 09:04:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:34:59.483 * Looking for test storage...
00:34:59.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:34:59.483 09:04:01 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:34:59.483 09:04:01 -- bdev/nbd_common.sh@6 -- # set -e
00:34:59.483 09:04:01 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:34:59.483 09:04:01 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:34:59.483 09:04:01 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:34:59.483 09:04:01 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:34:59.483 09:04:01 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:34:59.483 09:04:01 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:34:59.483 09:04:01 -- bdev/blockdev.sh@20 -- # :
00:34:59.483 09:04:01 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0
00:34:59.483 09:04:01 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1
00:34:59.483 09:04:01 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5
00:34:59.483 09:04:01 -- bdev/blockdev.sh@674 -- # uname -s
00:34:59.483 09:04:01 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']'
00:34:59.483 09:04:01 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0
00:34:59.483 09:04:01 -- bdev/blockdev.sh@682 -- # test_type=gpt
00:34:59.483 09:04:01 -- bdev/blockdev.sh@683 -- # crypto_device=
00:34:59.483 09:04:01 -- bdev/blockdev.sh@684 -- # dek=
00:34:59.483 09:04:01 -- bdev/blockdev.sh@685 -- # env_ctx=
00:34:59.483 09:04:01 -- bdev/blockdev.sh@686 -- # wait_for_rpc=
00:34:59.483 09:04:01 -- bdev/blockdev.sh@687 -- # '[' -n '' ']'
00:34:59.483 09:04:01 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]]
00:34:59.483 09:04:01 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]]
00:34:59.483 09:04:01 -- bdev/blockdev.sh@693 -- # start_spdk_tgt
00:34:59.483 09:04:01 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67551
00:34:59.483 09:04:01 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:34:59.483 09:04:01 -- bdev/blockdev.sh@49 -- # waitforlisten 67551
00:34:59.483 09:04:01 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:34:59.484 09:04:01 -- common/autotest_common.sh@817 -- # '[' -z 67551 ']'
00:34:59.484 09:04:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:59.484 09:04:01 -- common/autotest_common.sh@822 -- # local max_retries=100
00:34:59.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:59.484 09:04:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:59.484 09:04:01 -- common/autotest_common.sh@826 -- # xtrace_disable
00:34:59.484 09:04:01 -- common/autotest_common.sh@10 -- # set +x
00:34:59.484 [2024-04-18 09:04:01.560560] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:34:59.484 [2024-04-18 09:04:01.560787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67551 ]
00:34:59.743 [2024-04-18 09:04:01.771958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:00.002 [2024-04-18 09:04:02.019727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:01.378 09:04:03 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:35:01.378 09:04:03 -- common/autotest_common.sh@850 -- # return 0
00:35:01.378 09:04:03 -- bdev/blockdev.sh@694 -- # case "$test_type" in
00:35:01.378 09:04:03 -- bdev/blockdev.sh@702 -- # setup_gpt_conf
00:35:01.378 09:04:03 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:35:01.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:01.637 Waiting for block devices as requested
00:35:01.637 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:35:01.896 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:35:01.896 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:35:02.154 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:35:07.544 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:35:07.544 09:04:09 -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:35:07.544 09:04:09 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:35:07.544 09:04:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:35:07.544 09:04:09 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n2
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n3
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:35:07.544 09:04:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1648 -- # local device=nvme3n1
00:35:07.544 09:04:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:35:07.544 09:04:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
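The loop just traced reduces to a small sysfs scan: a block device counts as zoned when /sys/block/&lt;dev&gt;/queue/zoned exists and reads something other than "none". A condensed sketch (the harness additionally records each zoned device's PCI address, omitted here):

    # Condensed zoned-device scan (see get_zoned_devs / is_block_zoned above):
    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue            # older kernels lack the file
        [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs+=("${nvme##*/}")
    done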
00:35:07.544 09:04:09 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1')
00:35:07.544 09:04:09 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev
00:35:07.544 09:04:09 -- bdev/blockdev.sh@108 -- # gpt_nvme=
00:35:07.544 09:04:09 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}"
00:35:07.544 09:04:09 -- bdev/blockdev.sh@111 -- # [[ -z '' ]]
00:35:07.544 09:04:09 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1
00:35:07.544 09:04:09 -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print
00:35:07.544 09:04:09 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label
00:35:07.544 BYT;
00:35:07.544 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:35:07.544 09:04:09 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label
00:35:07.544 BYT;
00:35:07.544 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:35:07.544 09:04:09 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1
00:35:07.544 09:04:09 -- bdev/blockdev.sh@116 -- # break
00:35:07.544 09:04:09 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]]
00:35:07.544 09:04:09 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:35:07.544 09:04:09 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:35:07.544 09:04:09 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:35:07.544 09:04:09 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old
00:35:07.544 09:04:09 -- scripts/common.sh@408 -- # local spdk_guid
00:35:07.544 09:04:09 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:35:07.544 09:04:09 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:35:07.544 09:04:09 -- scripts/common.sh@413 -- # IFS='()'
00:35:07.544 09:04:09 -- scripts/common.sh@413 -- # read -r _ spdk_guid _
00:35:07.544 09:04:09 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:35:07.544 09:04:09 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:35:07.544 09:04:09 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:35:07.544 09:04:09 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:35:07.544 09:04:09 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:35:07.544 09:04:09 -- bdev/blockdev.sh@131 -- # get_spdk_gpt
00:35:07.544 09:04:09 -- scripts/common.sh@420 -- # local spdk_guid
00:35:07.544 09:04:09 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:35:07.544 09:04:09 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:35:07.544 09:04:09 -- scripts/common.sh@425 -- # IFS='()'
00:35:07.544 09:04:09 -- scripts/common.sh@425 -- # read -r _ spdk_guid _
00:35:07.544 09:04:09 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:35:07.544 09:04:09 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:35:07.544 09:04:09 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:35:07.544 09:04:09 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:35:07.544 09:04:09 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:35:07.544 09:04:09 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
00:35:08.541 The operation has completed successfully.
00:35:08.541 09:04:10 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1
00:35:09.476 The operation has completed successfully.
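Condensed, the GPT preparation above is four commands: the two partition-type GUIDs are grepped out of module/bdev/gpt/gpt.h, parted labels the blank disk and splits it in half, and sgdisk retags each partition with SPDK's type GUID plus the fixed unique GUIDs the later tests look up:

    # GPT setup condensed from the trace above (GUID values as extracted in this run):
    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b      # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c  # SPDK_GPT_PART_TYPE_GUID_OLD
    parted -s /dev/nvme1n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$SPDK_GPT_GUID" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
    sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1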
00:35:09.476 09:04:11 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:35:10.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:10.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:35:10.611 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:35:10.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:35:10.611 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:35:10.872 09:04:12 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs
00:35:10.872 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:10.872 09:04:12 -- common/autotest_common.sh@10 -- # set +x
00:35:10.872 []
00:35:10.872 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:10.872 09:04:12 -- bdev/blockdev.sh@136 -- # setup_nvme_conf
00:35:10.872 09:04:12 -- bdev/blockdev.sh@81 -- # local json
00:35:10.872 09:04:12 -- bdev/blockdev.sh@82 -- # mapfile -t json
00:35:10.872 09:04:12 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:35:10.872 09:04:12 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:35:10.872 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:10.872 09:04:12 -- common/autotest_common.sh@10 -- # set +x
00:35:11.130 09:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:11.130 09:04:13 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine
00:35:11.130 09:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:11.130 09:04:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.130 09:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:11.130 09:04:13 -- bdev/blockdev.sh@740 -- # cat
00:35:11.130 09:04:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel
00:35:11.130 09:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:11.130 09:04:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.130 09:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:11.130 09:04:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev
00:35:11.130 09:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:11.130 09:04:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.389 09:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:11.389 09:04:13 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf
00:35:11.389 09:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:11.389 09:04:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.389 09:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:35:11.389 09:04:13 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs
00:35:11.389 09:04:13 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)'
00:35:11.389 09:04:13 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs
00:35:11.389 09:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:35:11.389 09:04:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.389 09:04:13 --
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.389 09:04:13 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:35:11.389 09:04:13 -- bdev/blockdev.sh@749 -- # jq -r .name 00:35:11.390 09:04:13 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "71596f98-c961-4aa3-bc8d-c7d28760e0ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "71596f98-c961-4aa3-bc8d-c7d28760e0ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "f2e58fef-f8ac-445c-a60f-1958e40b86d0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2e58fef-f8ac-445c-a60f-1958e40b86d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "862c9ccb-66bb-47a2-b6e8-4cfd39c6ec53"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "862c9ccb-66bb-47a2-b6e8-4cfd39c6ec53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c100efb8-fbb7-48ae-8c1c-d6d27157b9f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c100efb8-fbb7-48ae-8c1c-d6d27157b9f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "09f500ae-97fc-4655-b6b1-86efa3130bd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "09f500ae-97fc-4655-b6b1-86efa3130bd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:35:11.390 09:04:13 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:35:11.390 09:04:13 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:35:11.390 09:04:13 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:35:11.390 09:04:13 -- bdev/blockdev.sh@754 -- # killprocess 67551 00:35:11.390 09:04:13 -- common/autotest_common.sh@936 -- # '[' -z 67551 ']' 00:35:11.390 09:04:13 -- common/autotest_common.sh@940 -- # kill -0 67551 00:35:11.390 09:04:13 -- common/autotest_common.sh@941 -- # uname 00:35:11.390 09:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:11.390 09:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67551 00:35:11.390 09:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:11.390 killing process with pid 67551 00:35:11.390 09:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:11.390 09:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67551' 00:35:11.390 09:04:13 -- common/autotest_common.sh@955 -- # kill 67551 00:35:11.390 09:04:13 -- common/autotest_common.sh@960 -- # wait 67551 00:35:14.676 09:04:16 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:14.676 09:04:16 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:35:14.676 09:04:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:35:14.676 09:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:14.676 09:04:16 -- common/autotest_common.sh@10 -- # set +x 00:35:14.676 ************************************ 00:35:14.676 START TEST bdev_hello_world 00:35:14.676 ************************************ 00:35:14.676 09:04:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:35:14.676 [2024-04-18 09:04:16.378651] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:35:14.676 [2024-04-18 09:04:16.378814] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68195 ]
00:35:14.676 [2024-04-18 09:04:16.565245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:14.934 [2024-04-18 09:04:16.846532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:15.870 [2024-04-18 09:04:17.602039] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:35:15.870 [2024-04-18 09:04:17.602098] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:35:15.870 [2024-04-18 09:04:17.602144] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:35:15.871 [2024-04-18 09:04:17.605604] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:35:15.871 [2024-04-18 09:04:17.606329] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:35:15.871 [2024-04-18 09:04:17.606382] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:35:15.871 [2024-04-18 09:04:17.606573] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:35:15.871
00:35:15.871 [2024-04-18 09:04:17.606606] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:35:17.270
00:35:17.270 real	0m2.905s
00:35:17.270 user	0m2.486s
00:35:17.270 sys	0m0.302s
00:35:17.270 09:04:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:35:17.270 09:04:19 -- common/autotest_common.sh@10 -- # set +x
00:35:17.270 ************************************
00:35:17.270 END TEST bdev_hello_world
00:35:17.270 ************************************
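The bdev_bounds test that follows drives the bdevio CUnit app rather than bdevperf: bdevio starts against the same generated config and waits (-w) until tests.py triggers the suites over the RPC socket. A condensed sketch of that flow (backgrounding and cleanup simplified from what the harness actually does):

    # Condensed bdev_bounds flow (simplified from the trace below):
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK/test/bdev/bdev.json" &      # -w: wait for the RPC trigger
    bdevio_pid=$!
    # the harness's waitforlisten polls /var/tmp/spdk.sock until the app is up
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests   # runs the CUnit suites below
    kill "$bdevio_pid" && wait "$bdevio_pid"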
00:35:17.270 09:04:19 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds ''
00:35:17.270 09:04:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:35:17.270 09:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:35:17.270 09:04:19 -- common/autotest_common.sh@10 -- # set +x
00:35:17.270 ************************************
00:35:17.270 START TEST bdev_bounds
00:35:17.270 ************************************
00:35:17.270 09:04:19 -- common/autotest_common.sh@1111 -- # bdev_bounds ''
00:35:17.270 09:04:19 -- bdev/blockdev.sh@290 -- # bdevio_pid=68253
00:35:17.270 Process bdevio pid: 68253
00:35:17.270 09:04:19 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:35:17.270 09:04:19 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:35:17.270 09:04:19 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68253'
00:35:17.270 09:04:19 -- bdev/blockdev.sh@293 -- # waitforlisten 68253
00:35:17.270 09:04:19 -- common/autotest_common.sh@817 -- # '[' -z 68253 ']'
00:35:17.270 09:04:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:17.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:17.270 09:04:19 -- common/autotest_common.sh@822 -- # local max_retries=100
00:35:17.270 09:04:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:17.270 09:04:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:35:17.270 09:04:19 -- common/autotest_common.sh@10 -- # set +x
00:35:17.542 [2024-04-18 09:04:19.423509] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:35:17.542 [2024-04-18 09:04:19.423673] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68253 ]
00:35:17.542 [2024-04-18 09:04:19.610227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:18.140 [2024-04-18 09:04:20.001173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:18.140 [2024-04-18 09:04:20.001336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:18.140 [2024-04-18 09:04:20.001447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:35:19.080 09:04:20 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:35:19.080 09:04:20 -- common/autotest_common.sh@850 -- # return 0
00:35:19.080 09:04:20 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:35:19.080 I/O targets:
00:35:19.080 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB)
00:35:19.080 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB)
00:35:19.080 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:35:19.080 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:35:19.080 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:35:19.080 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:35:19.080 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:35:19.080
00:35:19.080
00:35:19.080 CUnit - A unit testing framework for C - Version 2.1-3
00:35:19.080 http://cunit.sourceforge.net/
00:35:19.080
00:35:19.080
00:35:19.080 Suite: bdevio tests on: Nvme3n1
00:35:19.080 Test: blockdev write read block ...passed
00:35:19.080 Test: blockdev write zeroes read block ...passed
00:35:19.080 Test: blockdev write zeroes read no split ...passed
00:35:19.080 Test: blockdev write zeroes read split ...passed
00:35:19.080 Test: blockdev write zeroes read split partial ...passed
00:35:19.080 Test: blockdev reset ...[2024-04-18 09:04:21.051091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller
00:35:19.080 passed
00:35:19.080 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.055453] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:19.080 passed
00:35:19.080 Test: blockdev write read size > 128k ...passed
00:35:19.080 Test: blockdev write read invalid size ...passed
00:35:19.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:35:19.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:35:19.080 Test: blockdev write read max offset ...passed
00:35:19.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:35:19.080 Test: blockdev writev readv 8 blocks ...passed
00:35:19.080 Test: blockdev writev readv 30 x 1block ...passed
00:35:19.080 Test: blockdev writev readv block ...passed
00:35:19.080 Test: blockdev writev readv size > 128k ...passed
00:35:19.080 Test: blockdev writev readv size > 128k in two iovs ...passed
00:35:19.080 Test: blockdev comparev and writev ...passed
00:35:19.080 Test: blockdev nvme passthru rw ...[2024-04-18 09:04:21.063102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1e000a000 len:0x1000
00:35:19.080 [2024-04-18 09:04:21.063169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:35:19.080 passed
00:35:19.080 Test: blockdev nvme passthru vendor specific ...passed
00:35:19.080 Test: blockdev nvme admin passthru ...[2024-04-18 09:04:21.063926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:35:19.080 [2024-04-18 09:04:21.063963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:35:19.080 passed
00:35:19.080 Test: blockdev copy ...passed
00:35:19.080 Suite: bdevio tests on: Nvme2n3
00:35:19.080 Test: blockdev write read block ...passed
00:35:19.080 Test: blockdev write zeroes read block ...passed
00:35:19.080 Test: blockdev write zeroes read no split ...passed
00:35:19.080 Test: blockdev write zeroes read split ...passed
00:35:19.080 Test: blockdev write zeroes read split partial ...passed
00:35:19.080 Test: blockdev reset ...[2024-04-18 09:04:21.152791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:35:19.080 passed
00:35:19.080 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.157452] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:19.080 passed
00:35:19.080 Test: blockdev write read size > 128k ...passed
00:35:19.080 Test: blockdev write read invalid size ...passed
00:35:19.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:35:19.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:35:19.080 Test: blockdev write read max offset ...passed
00:35:19.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:35:19.080 Test: blockdev writev readv 8 blocks ...passed
00:35:19.080 Test: blockdev writev readv 30 x 1block ...passed
00:35:19.080 Test: blockdev writev readv block ...passed
00:35:19.080 Test: blockdev writev readv size > 128k ...passed
00:35:19.080 Test: blockdev writev readv size > 128k in two iovs ...passed
00:35:19.080 Test: blockdev comparev and writev ...passed
00:35:19.080 Test: blockdev nvme passthru rw ...[2024-04-18 09:04:21.165027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d7f04000 len:0x1000
00:35:19.080 [2024-04-18 09:04:21.165091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:35:19.080 passed
00:35:19.080 Test: blockdev nvme passthru vendor specific ...passed
00:35:19.080 Test: blockdev nvme admin passthru ...[2024-04-18 09:04:21.165797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:35:19.080 [2024-04-18 09:04:21.165834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:35:19.080 passed
00:35:19.080 Test: blockdev copy ...passed
00:35:19.080 Suite: bdevio tests on: Nvme2n2
00:35:19.080 Test: blockdev write read block ...passed
00:35:19.339 Test: blockdev write zeroes read block ...passed
00:35:19.339 Test: blockdev write zeroes read no split ...passed
00:35:19.339 Test: blockdev write zeroes read split ...passed
00:35:19.339 Test: blockdev write zeroes read split partial ...passed
00:35:19.339 Test: blockdev reset ...[2024-04-18 09:04:21.250594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:35:19.339 passed
00:35:19.339 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.255042] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:19.339 passed
00:35:19.339 Test: blockdev write read size > 128k ...passed
00:35:19.339 Test: blockdev write read invalid size ...passed
00:35:19.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:35:19.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:35:19.339 Test: blockdev write read max offset ...passed
00:35:19.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:35:19.339 Test: blockdev writev readv 8 blocks ...passed
00:35:19.339 Test: blockdev writev readv 30 x 1block ...passed
00:35:19.339 Test: blockdev writev readv block ...passed
00:35:19.339 Test: blockdev writev readv size > 128k ...passed
00:35:19.339 Test: blockdev writev readv size > 128k in two iovs ...passed
00:35:19.339 Test: blockdev comparev and writev ...passed
00:35:19.339 Test: blockdev nvme passthru rw ...[2024-04-18 09:04:21.262676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d7f04000 len:0x1000
00:35:19.339 [2024-04-18 09:04:21.262736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:35:19.339 passed
00:35:19.339 Test: blockdev nvme passthru vendor specific ...passed
00:35:19.339 Test: blockdev nvme admin passthru ...[2024-04-18 09:04:21.263440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:35:19.339 [2024-04-18 09:04:21.263473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:35:19.339 passed
00:35:19.339 Test: blockdev copy ...passed
00:35:19.339 Suite: bdevio tests on: Nvme2n1
00:35:19.339 Test: blockdev write read block ...passed
00:35:19.339 Test: blockdev write zeroes read block ...passed
00:35:19.339 Test: blockdev write zeroes read no split ...passed
00:35:19.339 Test: blockdev write zeroes read split ...passed
00:35:19.339 Test: blockdev write zeroes read split partial ...passed
00:35:19.339 Test: blockdev reset ...[2024-04-18 09:04:21.348436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:35:19.339 passed
00:35:19.339 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.353142] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:19.339 passed
00:35:19.339 Test: blockdev write read size > 128k ...passed
00:35:19.339 Test: blockdev write read invalid size ...passed
00:35:19.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:35:19.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:35:19.339 Test: blockdev write read max offset ...passed
00:35:19.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:35:19.339 Test: blockdev writev readv 8 blocks ...passed
00:35:19.339 Test: blockdev writev readv 30 x 1block ...passed
00:35:19.339 Test: blockdev writev readv block ...passed
00:35:19.339 Test: blockdev writev readv size > 128k ...passed
00:35:19.339 Test: blockdev writev readv size > 128k in two iovs ...passed
00:35:19.339 Test: blockdev comparev and writev ...passed
00:35:19.339 Test: blockdev nvme passthru rw ...[2024-04-18 09:04:21.360411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d303c000 len:0x1000
00:35:19.339 [2024-04-18 09:04:21.360472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:35:19.339 passed
00:35:19.339 Test: blockdev nvme passthru vendor specific ...passed
00:35:19.339 Test: blockdev nvme admin passthru ...[2024-04-18 09:04:21.361084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:35:19.339 [2024-04-18 09:04:21.361122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:35:19.339 passed
00:35:19.339 Test: blockdev copy ...passed
00:35:19.339 Suite: bdevio tests on: Nvme1n1
00:35:19.339 Test: blockdev write read block ...passed
00:35:19.339 Test: blockdev write zeroes read block ...passed
00:35:19.339 Test: blockdev write zeroes read no split ...passed
00:35:19.339 Test: blockdev write zeroes read split ...passed
00:35:19.609 Test: blockdev write zeroes read split partial ...passed
00:35:19.609 Test: blockdev reset ...[2024-04-18 09:04:21.451042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:35:19.609 passed
00:35:19.609 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.455524] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:19.609 passed 00:35:19.609 Test: blockdev write read size > 128k ...passed 00:35:19.609 Test: blockdev write read invalid size ...passed 00:35:19.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:19.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:19.609 Test: blockdev write read max offset ...passed 00:35:19.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:19.609 Test: blockdev writev readv 8 blocks ...passed 00:35:19.609 Test: blockdev writev readv 30 x 1block ...passed 00:35:19.609 Test: blockdev writev readv block ...passed 00:35:19.609 Test: blockdev writev readv size > 128k ...passed 00:35:19.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:19.609 Test: blockdev comparev and writev ...passed 00:35:19.609 Test: blockdev nvme passthru rw ...[2024-04-18 09:04:21.462593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d3038000 len:0x1000 00:35:19.609 [2024-04-18 09:04:21.462658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:19.609 passed 00:35:19.609 Test: blockdev nvme passthru vendor specific ...passed 00:35:19.609 Test: blockdev nvme admin passthru ...[2024-04-18 09:04:21.463299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:35:19.609 [2024-04-18 09:04:21.463339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:35:19.609 passed 00:35:19.609 Test: blockdev copy ...passed 00:35:19.609 Suite: bdevio tests on: Nvme0n1p2 00:35:19.609 Test: blockdev write read block ...passed 00:35:19.609 Test: blockdev write zeroes read block ...passed 00:35:19.609 Test: blockdev write zeroes read no split ...passed 00:35:19.609 Test: blockdev write zeroes read split ...passed 00:35:19.609 Test: blockdev write zeroes read split partial ...passed 00:35:19.609 Test: blockdev reset ...[2024-04-18 09:04:21.551485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:35:19.609 passed 00:35:19.609 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.555901] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:19.609 passed 00:35:19.609 Test: blockdev write read size > 128k ...passed 00:35:19.609 Test: blockdev write read invalid size ...passed 00:35:19.609 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:19.609 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:19.609 Test: blockdev write read max offset ...passed 00:35:19.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:19.609 Test: blockdev writev readv 8 blocks ...passed 00:35:19.609 Test: blockdev writev readv 30 x 1block ...passed 00:35:19.609 Test: blockdev writev readv block ...passed 00:35:19.609 Test: blockdev writev readv size > 128k ...passed 00:35:19.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:19.609 Test: blockdev comparev and writev ...passed 00:35:19.609 Test: blockdev nvme passthru rw ...passed 00:35:19.609 Test: blockdev nvme passthru vendor specific ...passed 00:35:19.609 Test: blockdev nvme admin passthru ...passed 00:35:19.610 Test: blockdev copy ...[2024-04-18 09:04:21.562686] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:35:19.610 separate metadata which is not supported yet. 00:35:19.610 passed 00:35:19.610 Suite: bdevio tests on: Nvme0n1p1 00:35:19.610 Test: blockdev write read block ...passed 00:35:19.610 Test: blockdev write zeroes read block ...passed 00:35:19.610 Test: blockdev write zeroes read no split ...passed 00:35:19.610 Test: blockdev write zeroes read split ...passed 00:35:19.610 Test: blockdev write zeroes read split partial ...passed 00:35:19.610 Test: blockdev reset ...[2024-04-18 09:04:21.641929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:35:19.610 passed 00:35:19.610 Test: blockdev write read 8 blocks ...[2024-04-18 09:04:21.646583] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:19.610 passed 00:35:19.610 Test: blockdev write read size > 128k ...passed 00:35:19.610 Test: blockdev write read invalid size ...passed 00:35:19.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:19.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:19.610 Test: blockdev write read max offset ...passed 00:35:19.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:19.610 Test: blockdev writev readv 8 blocks ...passed 00:35:19.610 Test: blockdev writev readv 30 x 1block ...passed 00:35:19.610 Test: blockdev writev readv block ...passed 00:35:19.610 Test: blockdev writev readv size > 128k ...passed 00:35:19.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:19.610 Test: blockdev comparev and writev ...[2024-04-18 09:04:21.654126] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:35:19.610 separate metadata which is not supported yet. 
00:35:19.610 passed 00:35:19.610 Test: blockdev nvme passthru rw ...passed 00:35:19.610 Test: blockdev nvme passthru vendor specific ...passed 00:35:19.610 Test: blockdev nvme admin passthru ...passed 00:35:19.610 Test: blockdev copy ...passed 00:35:19.610 00:35:19.610 Run Summary: Type Total Ran Passed Failed Inactive 00:35:19.610 suites 7 7 n/a 0 0 00:35:19.610 tests 161 161 161 0 0 00:35:19.610 asserts 1006 1006 1006 0 n/a 00:35:19.610 00:35:19.610 Elapsed time = 1.900 seconds 00:35:19.610 0 00:35:19.610 09:04:21 -- bdev/blockdev.sh@295 -- # killprocess 68253 00:35:19.610 09:04:21 -- common/autotest_common.sh@936 -- # '[' -z 68253 ']' 00:35:19.610 09:04:21 -- common/autotest_common.sh@940 -- # kill -0 68253 00:35:19.610 09:04:21 -- common/autotest_common.sh@941 -- # uname 00:35:19.610 09:04:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:19.610 09:04:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68253 00:35:19.610 09:04:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:19.610 09:04:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:19.610 killing process with pid 68253 00:35:19.610 09:04:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68253' 00:35:19.610 09:04:21 -- common/autotest_common.sh@955 -- # kill 68253 00:35:19.610 09:04:21 -- common/autotest_common.sh@960 -- # wait 68253 00:35:21.016 09:04:23 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:35:21.016 00:35:21.016 real 0m3.727s 00:35:21.016 user 0m8.960s 00:35:21.016 sys 0m0.580s 00:35:21.016 09:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:21.016 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:35:21.016 ************************************ 00:35:21.016 END TEST bdev_bounds 00:35:21.016 ************************************ 00:35:21.016 09:04:23 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:35:21.016 09:04:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:21.016 09:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:21.016 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:35:21.274 ************************************ 00:35:21.274 START TEST bdev_nbd 00:35:21.274 ************************************ 00:35:21.274 09:04:23 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:35:21.274 09:04:23 -- bdev/blockdev.sh@300 -- # uname -s 00:35:21.274 09:04:23 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:35:21.274 09:04:23 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:21.274 09:04:23 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:21.274 09:04:23 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:21.274 09:04:23 -- bdev/blockdev.sh@304 -- # local bdev_all 00:35:21.274 09:04:23 -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:35:21.274 09:04:23 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:35:21.274 09:04:23 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
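The bdev_bounds teardown traced above follows a fixed pattern: killprocess refuses to act without a PID, probes the process with kill -0, confirms via ps that the command name is the expected reactor_0 (and not sudo) before signalling, then waits for the exit. A minimal bash sketch reconstructing that helper from the traced steps (simplified: the real autotest helper also has a separate path for sudo-wrapped processes):

    killprocess() {
        # Usage: killprocess <pid> - refuse to run without a PID
        local pid=$1
        [[ -n $pid ]] || return 1

        # kill -0 sends no signal; it only checks that the process exists
        kill -0 "$pid" || return 1

        if [[ $(uname) == Linux ]]; then
            # Verify we are about to kill the SPDK reactor, not e.g. sudo
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        # wait works here because the target was started by this shell
        wait "$pid"
    }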
00:35:21.274 09:04:23 -- bdev/blockdev.sh@311 -- # local nbd_all 00:35:21.274 09:04:23 -- bdev/blockdev.sh@312 -- # bdev_num=7 00:35:21.274 09:04:23 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:21.274 09:04:23 -- bdev/blockdev.sh@314 -- # local nbd_list 00:35:21.274 09:04:23 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:21.274 09:04:23 -- bdev/blockdev.sh@315 -- # local bdev_list 00:35:21.274 09:04:23 -- bdev/blockdev.sh@318 -- # nbd_pid=68329 00:35:21.274 09:04:23 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:21.274 09:04:23 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:21.274 09:04:23 -- bdev/blockdev.sh@320 -- # waitforlisten 68329 /var/tmp/spdk-nbd.sock 00:35:21.274 09:04:23 -- common/autotest_common.sh@817 -- # '[' -z 68329 ']' 00:35:21.274 09:04:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:21.274 09:04:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:21.274 09:04:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:21.274 09:04:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:21.274 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:35:21.274 [2024-04-18 09:04:23.298710] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
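The bdev_nbd test drives a dedicated bdev_svc app over /var/tmp/spdk-nbd.sock and exports each bdev as a kernel /dev/nbdX device. A condensed sketch of the start/stop round trip using the rpc.py calls visible in the trace; waitfornbd mirrors the traced polling of /proc/partitions followed by a single direct-I/O read (/tmp/nbdtest stands in for the repo's nbdtest scratch file):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    waitfornbd() {
        # Poll until the kernel lists the device, then prove it is readable
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One 4k direct read, as in the trace's dd ... iflag=direct step
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        rm -f /tmp/nbdtest
    }

    # Export a bdev as an NBD device, verify it, then detach it again
    $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0
    waitfornbd nbd0
    $RPC nbd_get_disks            # JSON list of nbd_device/bdev_name pairs
    $RPC nbd_stop_disk /dev/nbd0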
00:35:21.274 [2024-04-18 09:04:23.299134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:21.532 [2024-04-18 09:04:23.498051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.789 [2024-04-18 09:04:23.767979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.727 09:04:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:22.727 09:04:24 -- common/autotest_common.sh@850 -- # return 0 00:35:22.728 09:04:24 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@24 -- # local i 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:22.728 09:04:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:35:22.986 09:04:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:22.987 09:04:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:22.987 09:04:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:22.987 09:04:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:22.987 09:04:24 -- common/autotest_common.sh@855 -- # local i 00:35:22.987 09:04:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:22.987 09:04:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:22.987 09:04:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:22.987 09:04:24 -- common/autotest_common.sh@859 -- # break 00:35:22.987 09:04:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:22.987 09:04:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:22.987 09:04:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:22.987 1+0 records in 00:35:22.987 1+0 records out 00:35:22.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777012 s, 5.3 MB/s 00:35:22.987 09:04:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:22.987 09:04:24 -- common/autotest_common.sh@872 -- # size=4096 00:35:22.987 09:04:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:22.987 09:04:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:22.987 09:04:24 -- common/autotest_common.sh@875 -- # return 0 00:35:22.987 09:04:24 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:35:22.987 09:04:24 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:22.987 09:04:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:23.245 09:04:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:35:23.245 09:04:25 -- common/autotest_common.sh@855 -- # local i 00:35:23.245 09:04:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:23.245 09:04:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:23.245 09:04:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:35:23.245 09:04:25 -- common/autotest_common.sh@859 -- # break 00:35:23.245 09:04:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:23.245 09:04:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:23.245 09:04:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:23.245 1+0 records in 00:35:23.245 1+0 records out 00:35:23.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651167 s, 6.3 MB/s 00:35:23.245 09:04:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.245 09:04:25 -- common/autotest_common.sh@872 -- # size=4096 00:35:23.245 09:04:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.245 09:04:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:23.245 09:04:25 -- common/autotest_common.sh@875 -- # return 0 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:23.245 09:04:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:35:23.502 09:04:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:23.502 09:04:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:23.502 09:04:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:23.502 09:04:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:35:23.502 09:04:25 -- common/autotest_common.sh@855 -- # local i 00:35:23.502 09:04:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:23.502 09:04:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:23.502 09:04:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:35:23.502 09:04:25 -- common/autotest_common.sh@859 -- # break 00:35:23.502 09:04:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:23.502 09:04:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:23.502 09:04:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:23.502 1+0 records in 00:35:23.502 1+0 records out 00:35:23.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769769 s, 5.3 MB/s 00:35:23.502 09:04:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.502 09:04:25 -- common/autotest_common.sh@872 -- # size=4096 00:35:23.502 09:04:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.502 09:04:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:23.502 09:04:25 -- 
common/autotest_common.sh@875 -- # return 0 00:35:23.503 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:23.503 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:23.503 09:04:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:23.815 09:04:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:35:23.815 09:04:25 -- common/autotest_common.sh@855 -- # local i 00:35:23.815 09:04:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:23.815 09:04:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:23.815 09:04:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:35:23.815 09:04:25 -- common/autotest_common.sh@859 -- # break 00:35:23.815 09:04:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:23.815 09:04:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:23.815 09:04:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:23.815 1+0 records in 00:35:23.815 1+0 records out 00:35:23.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823256 s, 5.0 MB/s 00:35:23.815 09:04:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.815 09:04:25 -- common/autotest_common.sh@872 -- # size=4096 00:35:23.815 09:04:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:23.815 09:04:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:23.815 09:04:25 -- common/autotest_common.sh@875 -- # return 0 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:23.815 09:04:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:35:24.073 09:04:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:35:24.073 09:04:26 -- common/autotest_common.sh@855 -- # local i 00:35:24.073 09:04:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:24.073 09:04:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:24.073 09:04:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:35:24.073 09:04:26 -- common/autotest_common.sh@859 -- # break 00:35:24.073 09:04:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:24.073 09:04:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:24.073 09:04:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.073 1+0 records in 00:35:24.073 1+0 records out 00:35:24.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977363 s, 4.2 MB/s 00:35:24.073 09:04:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.073 09:04:26 -- common/autotest_common.sh@872 -- # size=4096 00:35:24.073 09:04:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.073 09:04:26 
-- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:24.073 09:04:26 -- common/autotest_common.sh@875 -- # return 0 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:24.073 09:04:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:35:24.638 09:04:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:35:24.638 09:04:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:35:24.638 09:04:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:35:24.638 09:04:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:35:24.638 09:04:26 -- common/autotest_common.sh@855 -- # local i 00:35:24.638 09:04:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:24.638 09:04:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:24.639 09:04:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:35:24.639 09:04:26 -- common/autotest_common.sh@859 -- # break 00:35:24.639 09:04:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:24.639 09:04:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:24.639 09:04:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.639 1+0 records in 00:35:24.639 1+0 records out 00:35:24.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123472 s, 3.3 MB/s 00:35:24.639 09:04:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.639 09:04:26 -- common/autotest_common.sh@872 -- # size=4096 00:35:24.639 09:04:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.639 09:04:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:24.639 09:04:26 -- common/autotest_common.sh@875 -- # return 0 00:35:24.639 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:24.639 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:24.639 09:04:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:35:24.896 09:04:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:35:24.896 09:04:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:35:24.896 09:04:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:35:24.896 09:04:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:35:24.896 09:04:26 -- common/autotest_common.sh@855 -- # local i 00:35:24.896 09:04:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:24.896 09:04:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:24.896 09:04:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:35:24.896 09:04:26 -- common/autotest_common.sh@859 -- # break 00:35:24.896 09:04:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:24.896 09:04:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:24.896 09:04:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.896 1+0 records in 00:35:24.896 1+0 records out 00:35:24.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959529 s, 4.3 MB/s 00:35:24.896 09:04:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.896 09:04:26 -- common/autotest_common.sh@872 -- # size=4096 00:35:24.896 09:04:26 -- common/autotest_common.sh@873 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.896 09:04:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:24.897 09:04:26 -- common/autotest_common.sh@875 -- # return 0 00:35:24.897 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:24.897 09:04:26 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:24.897 09:04:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:25.154 09:04:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd0", 00:35:25.154 "bdev_name": "Nvme0n1p1" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd1", 00:35:25.154 "bdev_name": "Nvme0n1p2" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd2", 00:35:25.154 "bdev_name": "Nvme1n1" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd3", 00:35:25.154 "bdev_name": "Nvme2n1" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd4", 00:35:25.154 "bdev_name": "Nvme2n2" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd5", 00:35:25.154 "bdev_name": "Nvme2n3" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd6", 00:35:25.154 "bdev_name": "Nvme3n1" 00:35:25.154 } 00:35:25.154 ]' 00:35:25.154 09:04:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:25.154 09:04:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd0", 00:35:25.154 "bdev_name": "Nvme0n1p1" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd1", 00:35:25.154 "bdev_name": "Nvme0n1p2" 00:35:25.154 }, 00:35:25.154 { 00:35:25.154 "nbd_device": "/dev/nbd2", 00:35:25.154 "bdev_name": "Nvme1n1" 00:35:25.154 }, 00:35:25.155 { 00:35:25.155 "nbd_device": "/dev/nbd3", 00:35:25.155 "bdev_name": "Nvme2n1" 00:35:25.155 }, 00:35:25.155 { 00:35:25.155 "nbd_device": "/dev/nbd4", 00:35:25.155 "bdev_name": "Nvme2n2" 00:35:25.155 }, 00:35:25.155 { 00:35:25.155 "nbd_device": "/dev/nbd5", 00:35:25.155 "bdev_name": "Nvme2n3" 00:35:25.155 }, 00:35:25.155 { 00:35:25.155 "nbd_device": "/dev/nbd6", 00:35:25.155 "bdev_name": "Nvme3n1" 00:35:25.155 } 00:35:25.155 ]' 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@51 -- # local i 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.155 09:04:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:25.412 09:04:27 -- 
bdev/nbd_common.sh@41 -- # break 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.412 09:04:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@41 -- # break 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.671 09:04:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@41 -- # break 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.929 09:04:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:25.929 09:04:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@41 -- # break 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@41 -- # break 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:26.188 09:04:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@41 -- # break 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:26.445 09:04:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@41 -- # break 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.704 09:04:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:26.962 09:04:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:26.962 09:04:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:26.962 09:04:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@65 -- # true 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@65 -- # count=0 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@122 -- # count=0 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@127 -- # return 0 00:35:26.962 09:04:29 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@12 -- # local i 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:26.962 09:04:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:35:27.219 /dev/nbd0 00:35:27.219 09:04:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:27.219 09:04:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:27.219 09:04:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:27.219 09:04:29 -- common/autotest_common.sh@855 -- # local i 00:35:27.219 09:04:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:27.219 09:04:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:27.219 09:04:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:27.219 09:04:29 -- common/autotest_common.sh@859 -- # break 00:35:27.219 09:04:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:27.219 09:04:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:27.219 09:04:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:27.219 1+0 records in 00:35:27.219 1+0 records out 00:35:27.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517669 s, 7.9 MB/s 00:35:27.219 09:04:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.219 09:04:29 -- common/autotest_common.sh@872 -- # size=4096 00:35:27.219 09:04:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.219 09:04:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:27.219 09:04:29 -- common/autotest_common.sh@875 -- # return 0 00:35:27.219 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:27.219 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:27.219 09:04:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:35:27.477 /dev/nbd1 00:35:27.477 09:04:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:27.477 09:04:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:27.477 09:04:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:35:27.477 09:04:29 -- common/autotest_common.sh@855 -- # local i 00:35:27.477 09:04:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:27.477 09:04:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:27.477 09:04:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:35:27.734 09:04:29 -- common/autotest_common.sh@859 -- # break 00:35:27.734 09:04:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:27.734 09:04:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:27.734 09:04:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:27.734 1+0 records in 00:35:27.734 1+0 records out 00:35:27.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620872 s, 6.6 MB/s 00:35:27.734 09:04:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.734 09:04:29 -- 
common/autotest_common.sh@872 -- # size=4096 00:35:27.734 09:04:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.734 09:04:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:27.734 09:04:29 -- common/autotest_common.sh@875 -- # return 0 00:35:27.734 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:27.734 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:27.734 09:04:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:35:27.734 /dev/nbd10 00:35:27.734 09:04:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:27.734 09:04:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:27.734 09:04:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:35:27.734 09:04:29 -- common/autotest_common.sh@855 -- # local i 00:35:27.734 09:04:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:27.734 09:04:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:27.734 09:04:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:35:27.734 09:04:29 -- common/autotest_common.sh@859 -- # break 00:35:27.734 09:04:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:27.994 09:04:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:27.994 09:04:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:27.994 1+0 records in 00:35:27.994 1+0 records out 00:35:27.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541996 s, 7.6 MB/s 00:35:27.994 09:04:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.994 09:04:29 -- common/autotest_common.sh@872 -- # size=4096 00:35:27.994 09:04:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.994 09:04:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:27.994 09:04:29 -- common/autotest_common.sh@875 -- # return 0 00:35:27.994 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:27.994 09:04:29 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:27.994 09:04:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:35:27.994 /dev/nbd11 00:35:27.994 09:04:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:27.994 09:04:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:27.994 09:04:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:35:27.994 09:04:30 -- common/autotest_common.sh@855 -- # local i 00:35:27.994 09:04:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:35:28.253 09:04:30 -- common/autotest_common.sh@859 -- # break 00:35:28.253 09:04:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:28.253 1+0 records in 00:35:28.253 1+0 records out 00:35:28.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616 s, 6.6 MB/s 00:35:28.253 09:04:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.253 
09:04:30 -- common/autotest_common.sh@872 -- # size=4096 00:35:28.253 09:04:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.253 09:04:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:28.253 09:04:30 -- common/autotest_common.sh@875 -- # return 0 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:35:28.253 /dev/nbd12 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:35:28.253 09:04:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:35:28.253 09:04:30 -- common/autotest_common.sh@855 -- # local i 00:35:28.253 09:04:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:35:28.253 09:04:30 -- common/autotest_common.sh@859 -- # break 00:35:28.253 09:04:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:28.253 09:04:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:28.253 1+0 records in 00:35:28.253 1+0 records out 00:35:28.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510017 s, 8.0 MB/s 00:35:28.253 09:04:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.253 09:04:30 -- common/autotest_common.sh@872 -- # size=4096 00:35:28.253 09:04:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.253 09:04:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:28.253 09:04:30 -- common/autotest_common.sh@875 -- # return 0 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:28.253 09:04:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:35:28.512 /dev/nbd13 00:35:28.512 09:04:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:35:28.512 09:04:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:35:28.512 09:04:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:35:28.512 09:04:30 -- common/autotest_common.sh@855 -- # local i 00:35:28.512 09:04:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:28.512 09:04:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:28.512 09:04:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:35:28.512 09:04:30 -- common/autotest_common.sh@859 -- # break 00:35:28.512 09:04:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:28.512 09:04:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:28.512 09:04:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:28.512 1+0 records in 00:35:28.512 1+0 records out 00:35:28.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769428 s, 5.3 MB/s 00:35:28.512 09:04:30 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.512 09:04:30 -- common/autotest_common.sh@872 -- # size=4096 00:35:28.512 09:04:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.512 09:04:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:28.512 09:04:30 -- common/autotest_common.sh@875 -- # return 0 00:35:28.512 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:28.512 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:28.512 09:04:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:35:28.771 /dev/nbd14 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:35:28.771 09:04:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:35:28.771 09:04:30 -- common/autotest_common.sh@855 -- # local i 00:35:28.771 09:04:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:28.771 09:04:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:28.771 09:04:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:35:28.771 09:04:30 -- common/autotest_common.sh@859 -- # break 00:35:28.771 09:04:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:28.771 09:04:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:28.771 09:04:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:28.771 1+0 records in 00:35:28.771 1+0 records out 00:35:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651317 s, 6.3 MB/s 00:35:28.771 09:04:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.771 09:04:30 -- common/autotest_common.sh@872 -- # size=4096 00:35:28.771 09:04:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:28.771 09:04:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:28.771 09:04:30 -- common/autotest_common.sh@875 -- # return 0 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:28.771 09:04:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:29.339 09:04:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:29.339 { 00:35:29.339 "nbd_device": "/dev/nbd0", 00:35:29.339 "bdev_name": "Nvme0n1p1" 00:35:29.339 }, 00:35:29.339 { 00:35:29.339 "nbd_device": "/dev/nbd1", 00:35:29.339 "bdev_name": "Nvme0n1p2" 00:35:29.339 }, 00:35:29.339 { 00:35:29.339 "nbd_device": "/dev/nbd10", 00:35:29.339 "bdev_name": "Nvme1n1" 00:35:29.339 }, 00:35:29.339 { 00:35:29.339 "nbd_device": "/dev/nbd11", 00:35:29.339 "bdev_name": "Nvme2n1" 00:35:29.339 }, 00:35:29.339 { 00:35:29.339 "nbd_device": "/dev/nbd12", 00:35:29.339 "bdev_name": "Nvme2n2" 00:35:29.339 }, 00:35:29.339 { 00:35:29.340 "nbd_device": "/dev/nbd13", 00:35:29.340 "bdev_name": "Nvme2n3" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd14", 00:35:29.340 "bdev_name": "Nvme3n1" 00:35:29.340 } 00:35:29.340 ]' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:29.340 { 00:35:29.340 "nbd_device": 
"/dev/nbd0", 00:35:29.340 "bdev_name": "Nvme0n1p1" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd1", 00:35:29.340 "bdev_name": "Nvme0n1p2" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd10", 00:35:29.340 "bdev_name": "Nvme1n1" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd11", 00:35:29.340 "bdev_name": "Nvme2n1" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd12", 00:35:29.340 "bdev_name": "Nvme2n2" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd13", 00:35:29.340 "bdev_name": "Nvme2n3" 00:35:29.340 }, 00:35:29.340 { 00:35:29.340 "nbd_device": "/dev/nbd14", 00:35:29.340 "bdev_name": "Nvme3n1" 00:35:29.340 } 00:35:29.340 ]' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:29.340 /dev/nbd1 00:35:29.340 /dev/nbd10 00:35:29.340 /dev/nbd11 00:35:29.340 /dev/nbd12 00:35:29.340 /dev/nbd13 00:35:29.340 /dev/nbd14' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:29.340 /dev/nbd1 00:35:29.340 /dev/nbd10 00:35:29.340 /dev/nbd11 00:35:29.340 /dev/nbd12 00:35:29.340 /dev/nbd13 00:35:29.340 /dev/nbd14' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@65 -- # count=7 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@66 -- # echo 7 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@95 -- # count=7 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:29.340 256+0 records in 00:35:29.340 256+0 records out 00:35:29.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697858 s, 150 MB/s 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:29.340 256+0 records in 00:35:29.340 256+0 records out 00:35:29.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141985 s, 7.4 MB/s 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:29.340 09:04:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:29.599 256+0 records in 00:35:29.599 256+0 records out 00:35:29.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149155 s, 7.0 MB/s 00:35:29.599 09:04:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:29.599 09:04:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:29.858 256+0 records in 00:35:29.858 256+0 records out 00:35:29.858 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.190359 s, 5.5 MB/s 00:35:29.858 09:04:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:29.858 09:04:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:29.858 256+0 records in 00:35:29.858 256+0 records out 00:35:29.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184811 s, 5.7 MB/s 00:35:29.858 09:04:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:29.858 09:04:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:35:30.117 256+0 records in 00:35:30.117 256+0 records out 00:35:30.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147719 s, 7.1 MB/s 00:35:30.117 09:04:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:30.117 09:04:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:35:30.375 256+0 records in 00:35:30.375 256+0 records out 00:35:30.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148828 s, 7.0 MB/s 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:35:30.375 256+0 records in 00:35:30.375 256+0 records out 00:35:30.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167143 s, 6.3 MB/s 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:35:30.375 09:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:30.375 09:04:32 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@51 -- # local i 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@41 -- # break 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@45 -- # return 0 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:30.634 09:04:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:30.893 09:04:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:31.151 09:04:32 -- bdev/nbd_common.sh@41 -- # break 00:35:31.151 09:04:32 -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.151 09:04:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:31.151 09:04:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@41 -- # break 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:31.410 09:04:33 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@41 -- # break 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:31.410 09:04:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@41 -- # break 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:31.669 09:04:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@41 -- # break 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:31.927 09:04:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@41 -- # break 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@45 -- # return 0 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:32.184 09:04:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@65 -- # true 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@65 -- # count=0 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@66 -- # echo 0 
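The nbd_dd_data_verify pass above reduces to a simple pattern: fill a scratch file with 1 MiB of random data, dd it onto every exported /dev/nbdX with O_DIRECT, then cmp the first 1M of each device back against the same file; any non-zero cmp exit fails the test. A minimal standalone sketch of that pattern (device list and scratch path are placeholders, not the harness's values):

    tmp_file=$(mktemp)                      # stand-in for .../test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)          # the run above covers nbd0..nbd14
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev"
    done
    rm "$tmp_file"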
00:35:32.442 09:04:34 -- bdev/nbd_common.sh@104 -- # count=0 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@109 -- # return 0 00:35:32.442 09:04:34 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:35:32.442 09:04:34 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:32.699 malloc_lvol_verify 00:35:32.699 09:04:34 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:32.956 055bab7a-1d49-497f-a520-ba91b054e763 00:35:33.214 09:04:35 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:33.512 4bb69c26-5fc7-4f87-b304-33535ac508a0 00:35:33.513 09:04:35 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:33.772 /dev/nbd0 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:35:33.772 mke2fs 1.46.5 (30-Dec-2021) 00:35:33.772 Discarding device blocks: 0/4096 done 00:35:33.772 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:33.772 00:35:33.772 Allocating group tables: 0/1 done 00:35:33.772 Writing inode tables: 0/1 done 00:35:33.772 Creating journal (1024 blocks): done 00:35:33.772 Writing superblocks and filesystem accounting information: 0/1 done 00:35:33.772 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@51 -- # local i 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:33.772 09:04:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@41 -- # break 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@45 -- # return 0 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:35:34.030 09:04:36 -- bdev/nbd_common.sh@147 -- # return 0 00:35:34.030 09:04:36 -- bdev/blockdev.sh@326 -- # killprocess 68329 00:35:34.030 09:04:36 -- common/autotest_common.sh@936 -- # '[' -z 68329 ']' 
00:35:34.030 09:04:36 -- common/autotest_common.sh@940 -- # kill -0 68329 00:35:34.030 09:04:36 -- common/autotest_common.sh@941 -- # uname 00:35:34.030 09:04:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:34.030 09:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68329 00:35:34.030 09:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:34.030 09:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:34.030 09:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68329' 00:35:34.030 killing process with pid 68329 00:35:34.030 09:04:36 -- common/autotest_common.sh@955 -- # kill 68329 00:35:34.030 09:04:36 -- common/autotest_common.sh@960 -- # wait 68329 00:35:35.928 09:04:37 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:35:35.928 00:35:35.928 real 0m14.491s 00:35:35.929 user 0m19.104s 00:35:35.929 sys 0m5.333s 00:35:35.929 09:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:35.929 09:04:37 -- common/autotest_common.sh@10 -- # set +x 00:35:35.929 ************************************ 00:35:35.929 END TEST bdev_nbd 00:35:35.929 ************************************ 00:35:35.929 09:04:37 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:35:35.929 09:04:37 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:35:35.929 09:04:37 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:35:35.929 09:04:37 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:35:35.929 skipping fio tests on NVMe due to multi-ns failures. 00:35:35.929 09:04:37 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:35.929 09:04:37 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:35.929 09:04:37 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:35:35.929 09:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:35.929 09:04:37 -- common/autotest_common.sh@10 -- # set +x 00:35:35.929 ************************************ 00:35:35.929 START TEST bdev_verify 00:35:35.929 ************************************ 00:35:35.929 09:04:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:35.929 [2024-04-18 09:04:37.874560] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:35:35.929 [2024-04-18 09:04:37.875138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68780 ] 00:35:36.192 [2024-04-18 09:04:38.045507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:36.450 [2024-04-18 09:04:38.318249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.450 [2024-04-18 09:04:38.318260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.385 Running I/O for 5 seconds... 
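The bdevperf invocation above is the heart of bdev_verify: -q 128 keeps 128 I/Os outstanding per job, -o 4096 issues 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, and -m 0x3 pins the reactors to two cores (hence the Core Mask 0x1/0x2 job pairs in the table below). A trimmed-down sketch of the same run, with the repo path assumed:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout location
    # -q queue depth, -o I/O size in bytes, -w workload, -t run time (s), -m core mask
    "$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -m 0x3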
00:35:42.649
00:35:42.649 Latency(us)
00:35:42.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.649 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x0 length 0x5e800
00:35:42.649 Nvme0n1p1 : 5.05 1316.89 5.14 0.00 0.00 96893.68 20472.20 116342.00
00:35:42.649 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x5e800 length 0x5e800
00:35:42.649 Nvme0n1p1 : 5.07 1287.11 5.03 0.00 0.00 99100.14 19348.72 112347.43
00:35:42.649 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x0 length 0x5e7ff
00:35:42.649 Nvme0n1p2 : 5.06 1316.38 5.14 0.00 0.00 96783.95 23343.30 107853.53
00:35:42.649 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:35:42.649 Nvme0n1p2 : 5.07 1286.58 5.03 0.00 0.00 98857.76 22344.66 107853.53
00:35:42.649 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x0 length 0xa0000
00:35:42.649 Nvme1n1 : 5.06 1315.97 5.14 0.00 0.00 96695.20 25590.25 100363.70
00:35:42.649 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0xa0000 length 0xa0000
00:35:42.649 Nvme1n1 : 5.08 1285.97 5.02 0.00 0.00 98643.62 25715.08 102860.31
00:35:42.649 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x0 length 0x80000
00:35:42.649 Nvme2n1 : 5.06 1315.59 5.14 0.00 0.00 96492.80 25465.42 104857.60
00:35:42.649 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.649 Verification LBA range: start 0x80000 length 0x80000
00:35:42.649 Nvme2n1 : 5.08 1284.82 5.02 0.00 0.00 98487.63 27213.04 97367.77
00:35:42.650 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x0 length 0x80000
00:35:42.650 Nvme2n2 : 5.08 1323.66 5.17 0.00 0.00 95706.62 6116.69 110849.46
00:35:42.650 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x80000 length 0x80000
00:35:42.650 Nvme2n2 : 5.10 1292.59 5.05 0.00 0.00 97792.70 4618.73 98366.42
00:35:42.650 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x0 length 0x80000
00:35:42.650 Nvme2n3 : 5.08 1322.57 5.17 0.00 0.00 95567.93 10298.51 114344.72
00:35:42.650 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x80000 length 0x80000
00:35:42.650 Nvme2n3 : 5.10 1292.14 5.05 0.00 0.00 97616.35 4805.97 104358.28
00:35:42.650 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x0 length 0x20000
00:35:42.650 Nvme3n1 : 5.09 1331.85 5.20 0.00 0.00 94866.93 7489.83 117340.65
00:35:42.650 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.650 Verification LBA range: start 0x20000 length 0x20000
00:35:42.650 Nvme3n1 : 5.11 1301.99 5.09 0.00 0.00 96860.44 7458.62 111348.78
00:35:42.650 ===================================================================================================================
00:35:42.650 Total : 18274.12 71.38 0.00 0.00 97154.11 4618.73 117340.65
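The MiB/s column is derived directly from IOPS at the fixed 4096-byte I/O size, so the totals can be cross-checked by hand; for the Total row:

    # 18274.12 IOPS x 4096 B per I/O, converted to MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 18274.12 * 4096 / (1024 * 1024) }'   # -> 71.38, matching the table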
00:35:44.027 00:35:44.027 real 0m8.241s 00:35:44.027 user 0m14.910s 00:35:44.027 sys 0m0.326s 00:35:44.027 09:04:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:44.027 ************************************ 00:35:44.027 END TEST bdev_verify 00:35:44.027 ************************************ 00:35:44.027 09:04:46 -- common/autotest_common.sh@10 -- # set +x 00:35:44.027 09:04:46 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:44.027 09:04:46 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:35:44.027 09:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:44.027 09:04:46 -- common/autotest_common.sh@10 -- # set +x 00:35:44.286 ************************************ 00:35:44.286 START TEST bdev_verify_big_io 00:35:44.286 ************************************ 00:35:44.286 09:04:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:44.286 [2024-04-18 09:04:46.283830] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:35:44.286 [2024-04-18 09:04:46.284002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:35:44.544 [2024-04-18 09:04:46.467725] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:44.803 [2024-04-18 09:04:46.720842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.803 [2024-04-18 09:04:46.721269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.740 Running I/O for 5 seconds... 
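bdev_verify_big_io repeats the verify workload with -o 65536, i.e. 64 KiB I/Os, so IOPS in the table below drop by roughly the 16x size ratio while per-I/O latency climbs. With depth 128 per job, Little's law gives a rough sanity check on the Average column (reported in microseconds); taking the first job's 76.01 IOPS:

    # Expected average latency ~= queue_depth / IOPS while the queue stays full:
    awk 'BEGIN { printf "%.0f us\n", 128 / 76.01 * 1e6 }'   # ~1684000 us vs the 1627561.06 us reported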
00:35:52.303
00:35:52.303 Latency(us)
00:35:52.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:52.303 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x5e80
00:35:52.303 Nvme0n1p1 : 5.89 76.01 4.75 0.00 0.00 1627561.06 26838.55 1358155.58
00:35:52.303 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x5e80 length 0x5e80
00:35:52.303 Nvme0n1p1 : 6.08 68.38 4.27 0.00 0.00 1724115.33 176759.95 2476636.65
00:35:52.303 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x5e7f
00:35:52.303 Nvme0n1p2 : 6.02 74.46 4.65 0.00 0.00 1626255.78 138811.49 1206361.72
00:35:52.303 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x5e7f length 0x5e7f
00:35:52.303 Nvme0n1p2 : 6.02 77.69 4.86 0.00 0.00 1486214.05 45937.62 2125114.03
00:35:52.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0xa000
00:35:52.303 Nvme1n1 : 6.02 80.08 5.01 0.00 0.00 1470007.83 70903.71 1294242.38
00:35:52.303 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0xa000 length 0xa000
00:35:52.303 Nvme1n1 : 6.09 79.86 4.99 0.00 0.00 1384218.44 47185.92 1749623.95
00:35:52.303 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x8000
00:35:52.303 Nvme2n1 : 6.02 78.72 4.92 0.00 0.00 1452172.42 71902.35 1637775.85
00:35:52.303 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x8000 length 0x8000
00:35:52.303 Nvme2n1 : 6.09 84.09 5.26 0.00 0.00 1267554.01 58919.98 1781580.56
00:35:52.303 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x8000
00:35:52.303 Nvme2n2 : 6.02 71.90 4.49 0.00 0.00 1534421.98 71902.35 3179681.89
00:35:52.303 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x8000 length 0x8000
00:35:52.303 Nvme2n2 : 6.20 103.28 6.46 0.00 0.00 996269.25 3994.58 1813537.16
00:35:52.303 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x8000
00:35:52.303 Nvme2n3 : 6.05 81.95 5.12 0.00 0.00 1291996.81 27088.21 3211638.49
00:35:52.303 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x8000 length 0x8000
00:35:52.303 Nvme2n3 : 6.01 66.17 4.14 0.00 0.00 1859357.78 45438.29 2732289.46
00:35:52.303 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x0 length 0x2000
00:35:52.303 Nvme3n1 : 6.21 111.42 6.96 0.00 0.00 910816.67 1131.28 3275551.70
00:35:52.303 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:52.303 Verification LBA range: start 0x2000 length 0x2000
00:35:52.303 Nvme3n1 : 6.02 74.95 4.68 0.00 0.00 1596014.38 68407.10 1677721.60
00:35:52.303 ===================================================================================================================
00:35:52.303 Total : 1128.97 70.56 0.00 0.00 1405908.96 1131.28 3275551.70
00:35:54.831
00:35:54.831 real 0m10.196s user 0m18.688s sys 0m0.388s 09:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:35:54.831 09:04:56 -- common/autotest_common.sh@10 -- # set +x
00:35:54.831 ************************************
00:35:54.831 END TEST bdev_verify_big_io ************************************
00:35:54.831 09:04:56 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:54.831 09:04:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:35:54.831 09:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:35:54.831 09:04:56 -- common/autotest_common.sh@10 -- # set +x
00:35:54.831 ************************************
00:35:54.831 START TEST bdev_write_zeroes ************************************
00:35:54.831 09:04:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:54.831 [2024-04-18 09:04:56.615881] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:35:54.831 [2024-04-18 09:04:56.616052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69027 ]
00:35:54.831 [2024-04-18 09:04:56.806174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:55.089 [2024-04-18 09:04:57.173717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:56.024 Running I/O for 1 seconds...
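The write_zeroes workload exercises the zero-fill path rather than data verification. Whether a given bdev advertises native zeroing shows up in the supported_io_types map of bdev_get_bdevs output (the GPT bdevs dumped later in this log report "write_zeroes": true); against a running spdk_tgt the check is a one-liner, sketched here with an assumed bdev name:

    # Ask a live SPDK target whether a bdev supports the write_zeroes I/O type.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1p1 \
        | jq '.[0].supported_io_types.write_zeroes'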
00:35:56.960
00:35:56.960 Latency(us)
00:35:56.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:56.960 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme0n1p1 : 1.03 6238.72 24.37 0.00 0.00 20422.93 9799.19 42941.68
00:35:56.960 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme0n1p2 : 1.03 6228.06 24.33 0.00 0.00 20419.76 14417.92 42941.68
00:35:56.960 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme1n1 : 1.03 6218.66 24.29 0.00 0.00 20369.39 15042.07 42192.70
00:35:56.960 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme2n1 : 1.03 6259.05 24.45 0.00 0.00 20121.32 11297.16 41443.72
00:35:56.960 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme2n2 : 1.03 6249.65 24.41 0.00 0.00 20110.56 11109.91 41693.38
00:35:56.960 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme2n3 : 1.04 6240.41 24.38 0.00 0.00 20097.40 10735.42 41443.72
00:35:56.960 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:56.960 Nvme3n1 : 1.04 6231.02 24.34 0.00 0.00 20087.25 10548.18 41443.72
00:35:56.960 ===================================================================================================================
00:35:56.960 Total : 43665.57 170.57 0.00 0.00 20231.93 9799.19 42941.68
00:35:58.879
00:35:58.879 real 0m4.085s
00:35:58.879 user 0m3.605s
00:35:58.879 sys 0m0.349s
00:35:58.879 09:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:35:58.879 09:05:00 -- common/autotest_common.sh@10 -- # set +x
00:35:58.879 ************************************
00:35:58.879 END TEST bdev_write_zeroes ************************************
00:35:58.879 09:05:00 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:58.879 09:05:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:35:58.879 09:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:35:58.879 09:05:00 -- common/autotest_common.sh@10 -- # set +x
00:35:58.879 ************************************
00:35:58.879 START TEST bdev_json_nonenclosed ************************************
00:35:58.879 09:05:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:58.879 [2024-04-18 09:05:00.843114] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization...
00:35:58.879 [2024-04-18 09:05:00.843342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69095 ]
00:35:59.138 [2024-04-18 09:05:01.033138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:59.397 [2024-04-18 09:05:01.290388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:59.397 [2024-04-18 09:05:01.290493] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:35:59.397 [2024-04-18 09:05:01.290523] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:59.397 [2024-04-18 09:05:01.290537] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:59.964 00:35:59.964 real 0m1.073s 00:35:59.964 user 0m0.761s 00:35:59.964 sys 0m0.202s 00:35:59.964 09:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:59.964 09:05:01 -- common/autotest_common.sh@10 -- # set +x 00:35:59.964 ************************************ 00:35:59.964 END TEST bdev_json_nonenclosed 00:35:59.964 ************************************ 00:35:59.964 09:05:01 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:59.964 09:05:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:35:59.964 09:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:59.964 09:05:01 -- common/autotest_common.sh@10 -- # set +x 00:35:59.964 ************************************ 00:35:59.964 START TEST bdev_json_nonarray 00:35:59.964 ************************************ 00:35:59.964 09:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:59.964 [2024-04-18 09:05:02.012272] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:35:59.964 [2024-04-18 09:05:02.012437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69130 ] 00:36:00.222 [2024-04-18 09:05:02.184161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.481 [2024-04-18 09:05:02.515699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.481 [2024-04-18 09:05:02.515813] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
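Both JSON negative tests hand bdevperf a deliberately malformed --json config and pass only if spdk_app_start fails cleanly instead of crashing. Judging from the two errors above, the shapes exercised look roughly like this (illustrative sketches, not the literal contents of nonenclosed.json / nonarray.json):

    # valid skeleton:          {"subsystems": []}
    # nonenclosed (sketch):    "subsystems": []     <- body not wrapped in {}
    # nonarray (sketch):       {"subsystems": {}}   <- object where an array is required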
00:36:00.481 [2024-04-18 09:05:02.515839] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:00.481 [2024-04-18 09:05:02.515853] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:01.048 00:36:01.048 real 0m1.076s 00:36:01.048 user 0m0.817s 00:36:01.048 sys 0m0.151s 00:36:01.048 09:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:01.048 09:05:03 -- common/autotest_common.sh@10 -- # set +x 00:36:01.048 ************************************ 00:36:01.048 END TEST bdev_json_nonarray 00:36:01.048 ************************************ 00:36:01.048 09:05:03 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:36:01.048 09:05:03 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:36:01.048 09:05:03 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:36:01.048 09:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:01.048 09:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:01.048 09:05:03 -- common/autotest_common.sh@10 -- # set +x 00:36:01.048 ************************************ 00:36:01.048 START TEST bdev_gpt_uuid 00:36:01.048 ************************************ 00:36:01.048 09:05:03 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:36:01.048 09:05:03 -- bdev/blockdev.sh@614 -- # local bdev 00:36:01.048 09:05:03 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:36:01.048 09:05:03 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69165 00:36:01.048 09:05:03 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:36:01.048 09:05:03 -- bdev/blockdev.sh@49 -- # waitforlisten 69165 00:36:01.048 09:05:03 -- common/autotest_common.sh@817 -- # '[' -z 69165 ']' 00:36:01.048 09:05:03 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:36:01.048 09:05:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.048 09:05:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:01.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.049 09:05:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.049 09:05:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:01.049 09:05:03 -- common/autotest_common.sh@10 -- # set +x 00:36:01.307 [2024-04-18 09:05:03.269188] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:36:01.307 [2024-04-18 09:05:03.269355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69165 ] 00:36:01.566 [2024-04-18 09:05:03.456112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.824 [2024-04-18 09:05:03.805517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.247 09:05:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:03.247 09:05:04 -- common/autotest_common.sh@850 -- # return 0 00:36:03.247 09:05:04 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:03.247 09:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:03.247 09:05:04 -- common/autotest_common.sh@10 -- # set +x 00:36:03.247 Some configs were skipped because the RPC state that can call them passed over. 
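The gpt_uuid test starts a bare spdk_tgt, replays the bdev config over RPC, then blocks until bdev examine finishes so the GPT partition bdevs exist before they are queried. The same two calls work standalone against any running target (repo path assumed):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/scripts/rpc.py" load_config -j "$SPDK_DIR/test/bdev/bdev.json"
    "$SPDK_DIR/scripts/rpc.py" bdev_wait_for_examine   # returns once Nvme0n1p1/p2 are registered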
00:36:03.247 09:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:03.247 09:05:05 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:36:03.247 09:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:03.247 09:05:05 -- common/autotest_common.sh@10 -- # set +x 00:36:03.247 09:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:03.247 09:05:05 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:36:03.247 09:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:03.247 09:05:05 -- common/autotest_common.sh@10 -- # set +x 00:36:03.247 09:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:03.247 09:05:05 -- bdev/blockdev.sh@621 -- # bdev='[ 00:36:03.247 { 00:36:03.247 "name": "Nvme0n1p1", 00:36:03.247 "aliases": [ 00:36:03.247 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:36:03.247 ], 00:36:03.247 "product_name": "GPT Disk", 00:36:03.247 "block_size": 4096, 00:36:03.247 "num_blocks": 774144, 00:36:03.247 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:36:03.248 "md_size": 64, 00:36:03.248 "md_interleave": false, 00:36:03.248 "dif_type": 0, 00:36:03.248 "assigned_rate_limits": { 00:36:03.248 "rw_ios_per_sec": 0, 00:36:03.248 "rw_mbytes_per_sec": 0, 00:36:03.248 "r_mbytes_per_sec": 0, 00:36:03.248 "w_mbytes_per_sec": 0 00:36:03.248 }, 00:36:03.248 "claimed": false, 00:36:03.248 "zoned": false, 00:36:03.248 "supported_io_types": { 00:36:03.248 "read": true, 00:36:03.248 "write": true, 00:36:03.248 "unmap": true, 00:36:03.248 "write_zeroes": true, 00:36:03.248 "flush": true, 00:36:03.248 "reset": true, 00:36:03.248 "compare": true, 00:36:03.248 "compare_and_write": false, 00:36:03.248 "abort": true, 00:36:03.248 "nvme_admin": false, 00:36:03.248 "nvme_io": false 00:36:03.248 }, 00:36:03.248 "driver_specific": { 00:36:03.248 "gpt": { 00:36:03.248 "base_bdev": "Nvme0n1", 00:36:03.248 "offset_blocks": 256, 00:36:03.248 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:36:03.248 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:36:03.248 "partition_name": "SPDK_TEST_first" 00:36:03.248 } 00:36:03.248 } 00:36:03.248 } 00:36:03.248 ]' 00:36:03.248 09:05:05 -- bdev/blockdev.sh@622 -- # jq -r length 00:36:03.248 09:05:05 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:36:03.248 09:05:05 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:36:03.506 09:05:05 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:36:03.506 09:05:05 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:36:03.506 09:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:03.506 09:05:05 -- common/autotest_common.sh@10 -- # set +x 00:36:03.506 09:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@626 -- # bdev='[ 00:36:03.506 { 00:36:03.506 "name": "Nvme0n1p2", 00:36:03.506 "aliases": [ 00:36:03.506 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:36:03.506 ], 00:36:03.506 "product_name": "GPT Disk", 00:36:03.506 "block_size": 4096, 00:36:03.506 "num_blocks": 774143, 00:36:03.506 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:36:03.506 "md_size": 64, 00:36:03.506 "md_interleave": false, 00:36:03.506 "dif_type": 0, 00:36:03.506 "assigned_rate_limits": { 00:36:03.506 "rw_ios_per_sec": 0, 00:36:03.506 "rw_mbytes_per_sec": 0, 00:36:03.506 "r_mbytes_per_sec": 0, 00:36:03.506 "w_mbytes_per_sec": 0 00:36:03.506 }, 00:36:03.506 "claimed": false, 00:36:03.506 "zoned": false, 00:36:03.506 "supported_io_types": { 00:36:03.506 "read": true, 00:36:03.506 "write": true, 00:36:03.506 "unmap": true, 00:36:03.506 "write_zeroes": true, 00:36:03.506 "flush": true, 00:36:03.506 "reset": true, 00:36:03.506 "compare": true, 00:36:03.506 "compare_and_write": false, 00:36:03.506 "abort": true, 00:36:03.506 "nvme_admin": false, 00:36:03.506 "nvme_io": false 00:36:03.506 }, 00:36:03.506 "driver_specific": { 00:36:03.506 "gpt": { 00:36:03.506 "base_bdev": "Nvme0n1", 00:36:03.506 "offset_blocks": 774400, 00:36:03.506 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:36:03.506 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:36:03.506 "partition_name": "SPDK_TEST_second" 00:36:03.506 } 00:36:03.506 } 00:36:03.506 } 00:36:03.506 ]' 00:36:03.506 09:05:05 -- bdev/blockdev.sh@627 -- # jq -r length 00:36:03.506 09:05:05 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:36:03.506 09:05:05 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:36:03.506 09:05:05 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:36:03.506 09:05:05 -- bdev/blockdev.sh@631 -- # killprocess 69165 00:36:03.506 09:05:05 -- common/autotest_common.sh@936 -- # '[' -z 69165 ']' 00:36:03.506 09:05:05 -- common/autotest_common.sh@940 -- # kill -0 69165 00:36:03.506 09:05:05 -- common/autotest_common.sh@941 -- # uname 00:36:03.506 09:05:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:03.506 09:05:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69165 00:36:03.765 09:05:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:03.765 09:05:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:03.765 killing process with pid 69165 00:36:03.765 09:05:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69165' 00:36:03.765 09:05:05 -- common/autotest_common.sh@955 -- # kill 69165 00:36:03.765 09:05:05 -- common/autotest_common.sh@960 -- # wait 69165 00:36:06.298 00:36:06.298 real 0m5.245s 00:36:06.298 user 0m5.437s 00:36:06.298 sys 0m0.571s 00:36:06.298 09:05:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:06.298 09:05:08 -- common/autotest_common.sh@10 -- # set +x 00:36:06.298 ************************************ 00:36:06.298 END TEST bdev_gpt_uuid 00:36:06.298 ************************************ 00:36:06.557 09:05:08 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:36:06.557 09:05:08 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:36:06.557 09:05:08 -- bdev/blockdev.sh@811 -- # cleanup 00:36:06.557 09:05:08 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:36:06.557 09:05:08 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:06.557 09:05:08 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 
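The [[ ... == \a\b\f... ]] comparisons above are bash pattern matches with every character backslash-escaped so the GUID is compared literally. The same check reads more naturally as a plain string test over the two JSON fields; a sketch against a running target:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    json=$("$rpc" bdev_get_bdevs -b Nvme0n1p2)
    alias=$(jq -r '.[0].aliases[0]' <<<"$json")
    guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")
    [ "$alias" = "$guid" ] && echo "GPT alias matches unique_partition_guid"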
00:36:06.557 09:05:08 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:36:06.557 09:05:08 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:36:06.557 09:05:08 -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:06.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:07.074 Waiting for block devices as requested 00:36:07.074 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:07.332 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:07.332 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:07.590 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:12.860 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:12.860 09:05:14 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:36:12.860 09:05:14 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:36:12.860 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:36:12.860 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:36:12.860 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:36:12.860 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:36:12.860 09:05:14 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:36:12.860 00:36:12.860 real 1m13.568s 00:36:12.860 user 1m31.136s 00:36:12.860 sys 0m12.414s 00:36:12.860 09:05:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:12.860 09:05:14 -- common/autotest_common.sh@10 -- # set +x 00:36:12.860 ************************************ 00:36:12.860 END TEST blockdev_nvme_gpt 00:36:12.860 ************************************ 00:36:12.860 09:05:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:36:12.860 09:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:12.860 09:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:12.860 09:05:14 -- common/autotest_common.sh@10 -- # set +x 00:36:13.118 ************************************ 00:36:13.118 START TEST nvme 00:36:13.118 ************************************ 00:36:13.118 09:05:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:36:13.118 * Looking for test storage... 
00:36:13.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:36:13.118 09:05:15 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:13.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:14.251 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:14.510 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:14.510 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:36:14.510 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:36:14.510 09:05:16 -- nvme/nvme.sh@79 -- # uname 00:36:14.510 09:05:16 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:36:14.510 09:05:16 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:36:14.510 09:05:16 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:36:14.510 09:05:16 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:36:14.510 09:05:16 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:36:14.510 09:05:16 -- common/autotest_common.sh@1055 -- # echo 0 00:36:14.510 09:05:16 -- common/autotest_common.sh@1057 -- # stubpid=69829 00:36:14.510 Waiting for stub to ready for secondary processes... 00:36:14.510 09:05:16 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:36:14.510 09:05:16 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:14.510 09:05:16 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69829 ]] 00:36:14.510 09:05:16 -- common/autotest_common.sh@1062 -- # sleep 1s 00:36:14.510 09:05:16 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:36:14.795 [2024-04-18 09:05:16.641637] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:36:14.795 [2024-04-18 09:05:16.641801] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:36:15.731 09:05:17 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:15.731 09:05:17 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69829 ]] 00:36:15.731 09:05:17 -- common/autotest_common.sh@1062 -- # sleep 1s 00:36:15.731 [2024-04-18 09:05:17.754512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:15.990 [2024-04-18 09:05:18.085526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.990 [2024-04-18 09:05:18.085606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.990 [2024-04-18 09:05:18.085581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.249 [2024-04-18 09:05:18.113361] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:36:16.249 [2024-04-18 09:05:18.113455] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:16.249 [2024-04-18 09:05:18.127591] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:36:16.249 [2024-04-18 09:05:18.127837] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:36:16.249 [2024-04-18 09:05:18.132457] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:16.249 [2024-04-18 09:05:18.133560] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:36:16.249 [2024-04-18 09:05:18.133915] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:36:16.249 [2024-04-18 09:05:18.140595] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:16.249 [2024-04-18 09:05:18.141137] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:36:16.249 [2024-04-18 09:05:18.141450] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:36:16.249 [2024-04-18 09:05:18.146009] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:16.249 [2024-04-18 09:05:18.146492] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:36:16.249 [2024-04-18 09:05:18.146741] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:36:16.249 [2024-04-18 09:05:18.147032] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:36:16.249 [2024-04-18 09:05:18.147402] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:36:16.507 09:05:18 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:16.507 done. 00:36:16.507 09:05:18 -- common/autotest_common.sh@1064 -- # echo done. 
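The repeated '[ -e /var/run/spdk_stub0 ]' / sleep 1s traces above are a plain readiness poll: the stub creates that file once its primary process is up, and the harness re-checks once per second while EAL setup and the CUSE device creation seen above complete. The gist of the loop:

    # Wait until the stub signals readiness by creating its sentinel file.
    while [ ! -e /var/run/spdk_stub0 ]; do
        sleep 1   # stub still initializing (EAL, then nvme_cuse devices)
    done
    echo done.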
00:36:16.507 09:05:18 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:36:16.507 09:05:18 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:36:16.507 09:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:16.507 09:05:18 -- common/autotest_common.sh@10 -- # set +x 00:36:16.766 ************************************ 00:36:16.766 START TEST nvme_reset 00:36:16.766 ************************************ 00:36:16.766 09:05:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:36:17.026 Initializing NVMe Controllers 00:36:17.026 Skipping QEMU NVMe SSD at 0000:00:10.0 00:36:17.026 Skipping QEMU NVMe SSD at 0000:00:11.0 00:36:17.026 Skipping QEMU NVMe SSD at 0000:00:13.0 00:36:17.026 Skipping QEMU NVMe SSD at 0000:00:12.0 00:36:17.026 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:36:17.026 00:36:17.026 real 0m0.384s 00:36:17.026 user 0m0.118s 00:36:17.026 sys 0m0.193s 00:36:17.026 09:05:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:17.026 09:05:19 -- common/autotest_common.sh@10 -- # set +x 00:36:17.026 ************************************ 00:36:17.026 END TEST nvme_reset 00:36:17.026 ************************************ 00:36:17.026 09:05:19 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:36:17.026 09:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:17.026 09:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:17.026 09:05:19 -- common/autotest_common.sh@10 -- # set +x 00:36:17.285 ************************************ 00:36:17.285 START TEST nvme_identify 00:36:17.285 ************************************ 00:36:17.285 09:05:19 -- common/autotest_common.sh@1111 -- # nvme_identify 00:36:17.285 09:05:19 -- nvme/nvme.sh@12 -- # bdfs=() 00:36:17.285 09:05:19 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:36:17.285 09:05:19 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:36:17.285 09:05:19 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:36:17.285 09:05:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:36:17.285 09:05:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:36:17.285 09:05:19 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:17.285 09:05:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:36:17.285 09:05:19 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:17.285 09:05:19 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:36:17.285 09:05:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:36:17.285 09:05:19 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:36:17.547 [2024-04-18 09:05:19.525697] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69866 terminated unexpected 00:36:17.547 ===================================================== 00:36:17.547 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:17.547 ===================================================== 00:36:17.547 Controller Capabilities/Features 00:36:17.547 ================================ 00:36:17.547 Vendor ID: 1b36 00:36:17.547 Subsystem Vendor ID: 1af4 00:36:17.547 Serial Number: 12340 00:36:17.547 Model Number: QEMU NVMe Ctrl 00:36:17.547 Firmware Version: 8.0.0 00:36:17.547 Recommended Arb 
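The reset example takes bdevperf-style options: -q 64 outstanding commands, -w write as the workload to run while controllers are reset, -o 4096-byte I/Os, and -t 5 seconds. As the output below shows, every controller on this host is a QEMU NVMe SSD and is skipped, so the tool exits without driving I/O. Invoked standalone (repo path assumed):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/nvme/reset/reset" -q 64 -w write -o 4096 -t 5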
Burst: 6 00:36:17.547 IEEE OUI Identifier: 00 54 52 00:36:17.547 Multi-path I/O 00:36:17.547 May have multiple subsystem ports: No 00:36:17.547 May have multiple controllers: No 00:36:17.547 Associated with SR-IOV VF: No 00:36:17.547 Max Data Transfer Size: 524288 00:36:17.547 Max Number of Namespaces: 256 00:36:17.547 Max Number of I/O Queues: 64 00:36:17.547 NVMe Specification Version (VS): 1.4 00:36:17.547 NVMe Specification Version (Identify): 1.4 00:36:17.547 Maximum Queue Entries: 2048 00:36:17.547 Contiguous Queues Required: Yes 00:36:17.547 Arbitration Mechanisms Supported 00:36:17.547 Weighted Round Robin: Not Supported 00:36:17.547 Vendor Specific: Not Supported 00:36:17.547 Reset Timeout: 7500 ms 00:36:17.547 Doorbell Stride: 4 bytes 00:36:17.547 NVM Subsystem Reset: Not Supported 00:36:17.547 Command Sets Supported 00:36:17.547 NVM Command Set: Supported 00:36:17.547 Boot Partition: Not Supported 00:36:17.547 Memory Page Size Minimum: 4096 bytes 00:36:17.547 Memory Page Size Maximum: 65536 bytes 00:36:17.547 Persistent Memory Region: Not Supported 00:36:17.547 Optional Asynchronous Events Supported 00:36:17.547 Namespace Attribute Notices: Supported 00:36:17.547 Firmware Activation Notices: Not Supported 00:36:17.547 ANA Change Notices: Not Supported 00:36:17.547 PLE Aggregate Log Change Notices: Not Supported 00:36:17.547 LBA Status Info Alert Notices: Not Supported 00:36:17.547 EGE Aggregate Log Change Notices: Not Supported 00:36:17.547 Normal NVM Subsystem Shutdown event: Not Supported 00:36:17.547 Zone Descriptor Change Notices: Not Supported 00:36:17.547 Discovery Log Change Notices: Not Supported 00:36:17.547 Controller Attributes 00:36:17.547 128-bit Host Identifier: Not Supported 00:36:17.547 Non-Operational Permissive Mode: Not Supported 00:36:17.547 NVM Sets: Not Supported 00:36:17.547 Read Recovery Levels: Not Supported 00:36:17.547 Endurance Groups: Not Supported 00:36:17.547 Predictable Latency Mode: Not Supported 00:36:17.547 Traffic Based Keep ALive: Not Supported 00:36:17.547 Namespace Granularity: Not Supported 00:36:17.547 SQ Associations: Not Supported 00:36:17.547 UUID List: Not Supported 00:36:17.547 Multi-Domain Subsystem: Not Supported 00:36:17.547 Fixed Capacity Management: Not Supported 00:36:17.547 Variable Capacity Management: Not Supported 00:36:17.547 Delete Endurance Group: Not Supported 00:36:17.547 Delete NVM Set: Not Supported 00:36:17.547 Extended LBA Formats Supported: Supported 00:36:17.547 Flexible Data Placement Supported: Not Supported 00:36:17.547 00:36:17.547 Controller Memory Buffer Support 00:36:17.547 ================================ 00:36:17.547 Supported: No 00:36:17.547 00:36:17.547 Persistent Memory Region Support 00:36:17.547 ================================ 00:36:17.547 Supported: No 00:36:17.547 00:36:17.547 Admin Command Set Attributes 00:36:17.547 ============================ 00:36:17.547 Security Send/Receive: Not Supported 00:36:17.547 Format NVM: Supported 00:36:17.547 Firmware Activate/Download: Not Supported 00:36:17.547 Namespace Management: Supported 00:36:17.547 Device Self-Test: Not Supported 00:36:17.547 Directives: Supported 00:36:17.547 NVMe-MI: Not Supported 00:36:17.547 Virtualization Management: Not Supported 00:36:17.547 Doorbell Buffer Config: Supported 00:36:17.547 Get LBA Status Capability: Not Supported 00:36:17.547 Command & Feature Lockdown Capability: Not Supported 00:36:17.547 Abort Command Limit: 4 00:36:17.547 Async Event Request Limit: 4 00:36:17.547 Number of Firmware Slots: N/A 00:36:17.547 
Firmware Slot 1 Read-Only: N/A 00:36:17.547 Firmware Activation Without Reset: N/A 00:36:17.547 Multiple Update Detection Support: N/A 00:36:17.547 Firmware Update Granularity: No Information Provided 00:36:17.547 Per-Namespace SMART Log: Yes 00:36:17.547 Asymmetric Namespace Access Log Page: Not Supported 00:36:17.547 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:36:17.547 Command Effects Log Page: Supported 00:36:17.547 Get Log Page Extended Data: Supported 00:36:17.548 Telemetry Log Pages: Not Supported 00:36:17.548 Persistent Event Log Pages: Not Supported 00:36:17.548 Supported Log Pages Log Page: May Support 00:36:17.548 Commands Supported & Effects Log Page: Not Supported 00:36:17.548 Feature Identifiers & Effects Log Page:May Support 00:36:17.548 NVMe-MI Commands & Effects Log Page: May Support 00:36:17.548 Data Area 4 for Telemetry Log: Not Supported 00:36:17.548 Error Log Page Entries Supported: 1 00:36:17.548 Keep Alive: Not Supported 00:36:17.548 00:36:17.548 NVM Command Set Attributes 00:36:17.548 ========================== 00:36:17.548 Submission Queue Entry Size 00:36:17.548 Max: 64 00:36:17.548 Min: 64 00:36:17.548 Completion Queue Entry Size 00:36:17.548 Max: 16 00:36:17.548 Min: 16 00:36:17.548 Number of Namespaces: 256 00:36:17.548 Compare Command: Supported 00:36:17.548 Write Uncorrectable Command: Not Supported 00:36:17.548 Dataset Management Command: Supported 00:36:17.548 Write Zeroes Command: Supported 00:36:17.548 Set Features Save Field: Supported 00:36:17.548 Reservations: Not Supported 00:36:17.548 Timestamp: Supported 00:36:17.548 Copy: Supported 00:36:17.548 Volatile Write Cache: Present 00:36:17.548 Atomic Write Unit (Normal): 1 00:36:17.548 Atomic Write Unit (PFail): 1 00:36:17.548 Atomic Compare & Write Unit: 1 00:36:17.548 Fused Compare & Write: Not Supported 00:36:17.548 Scatter-Gather List 00:36:17.548 SGL Command Set: Supported 00:36:17.548 SGL Keyed: Not Supported 00:36:17.548 SGL Bit Bucket Descriptor: Not Supported 00:36:17.548 SGL Metadata Pointer: Not Supported 00:36:17.548 Oversized SGL: Not Supported 00:36:17.548 SGL Metadata Address: Not Supported 00:36:17.548 SGL Offset: Not Supported 00:36:17.548 Transport SGL Data Block: Not Supported 00:36:17.548 Replay Protected Memory Block: Not Supported 00:36:17.548 00:36:17.548 Firmware Slot Information 00:36:17.548 ========================= 00:36:17.548 Active slot: 1 00:36:17.548 Slot 1 Firmware Revision: 1.0 00:36:17.548 00:36:17.548 00:36:17.548 Commands Supported and Effects 00:36:17.548 ============================== 00:36:17.548 Admin Commands 00:36:17.548 -------------- 00:36:17.548 Delete I/O Submission Queue (00h): Supported 00:36:17.548 Create I/O Submission Queue (01h): Supported 00:36:17.548 Get Log Page (02h): Supported 00:36:17.548 Delete I/O Completion Queue (04h): Supported 00:36:17.548 Create I/O Completion Queue (05h): Supported 00:36:17.548 Identify (06h): Supported 00:36:17.548 Abort (08h): Supported 00:36:17.548 Set Features (09h): Supported 00:36:17.548 Get Features (0Ah): Supported 00:36:17.548 Asynchronous Event Request (0Ch): Supported 00:36:17.548 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:17.548 Directive Send (19h): Supported 00:36:17.548 Directive Receive (1Ah): Supported 00:36:17.548 Virtualization Management (1Ch): Supported 00:36:17.548 Doorbell Buffer Config (7Ch): Supported 00:36:17.548 Format NVM (80h): Supported LBA-Change 00:36:17.548 I/O Commands 00:36:17.548 ------------ 00:36:17.548 Flush (00h): Supported LBA-Change 00:36:17.548 Write (01h): 
Supported LBA-Change 00:36:17.548 Read (02h): Supported 00:36:17.548 Compare (05h): Supported 00:36:17.548 Write Zeroes (08h): Supported LBA-Change 00:36:17.548 Dataset Management (09h): Supported LBA-Change 00:36:17.548 Unknown (0Ch): Supported 00:36:17.548 Unknown (12h): Supported 00:36:17.548 Copy (19h): Supported LBA-Change 00:36:17.548 Unknown (1Dh): Supported LBA-Change 00:36:17.548 00:36:17.548 Error Log 00:36:17.548 ========= 00:36:17.548 00:36:17.548 Arbitration 00:36:17.548 =========== 00:36:17.548 Arbitration Burst: no limit 00:36:17.548 00:36:17.548 Power Management 00:36:17.548 ================ 00:36:17.548 Number of Power States: 1 00:36:17.548 Current Power State: Power State #0 00:36:17.548 Power State #0: 00:36:17.548 Max Power: 25.00 W 00:36:17.548 Non-Operational State: Operational 00:36:17.548 Entry Latency: 16 microseconds 00:36:17.548 Exit Latency: 4 microseconds 00:36:17.548 Relative Read Throughput: 0 00:36:17.548 Relative Read Latency: 0 00:36:17.548 Relative Write Throughput: 0 00:36:17.548 Relative Write Latency: 0 00:36:17.548 Idle Power: Not Reported [2024-04-18 09:05:19.527182] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69866 terminated unexpected 00:36:17.548 Active Power: Not Reported 00:36:17.548 Non-Operational Permissive Mode: Not Supported 00:36:17.548 00:36:17.548 Health Information 00:36:17.548 ================== 00:36:17.548 Critical Warnings: 00:36:17.548 Available Spare Space: OK 00:36:17.548 Temperature: OK 00:36:17.548 Device Reliability: OK 00:36:17.548 Read Only: No 00:36:17.548 Volatile Memory Backup: OK 00:36:17.548 Current Temperature: 323 Kelvin (50 Celsius) 00:36:17.548 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:17.548 Available Spare: 0% 00:36:17.548 Available Spare Threshold: 0% 00:36:17.548 Life Percentage Used: 0% 00:36:17.548 Data Units Read: 928 00:36:17.548 Data Units Written: 761 00:36:17.548 Host Read Commands: 46845 00:36:17.548 Host Write Commands: 45311 00:36:17.548 Controller Busy Time: 0 minutes 00:36:17.548 Power Cycles: 0 00:36:17.548 Power On Hours: 0 hours 00:36:17.548 Unsafe Shutdowns: 0 00:36:17.548 Unrecoverable Media Errors: 0 00:36:17.548 Lifetime Error Log Entries: 0 00:36:17.548 Warning Temperature Time: 0 minutes 00:36:17.548 Critical Temperature Time: 0 minutes 00:36:17.548 00:36:17.548 Number of Queues 00:36:17.548 ================ 00:36:17.548 Number of I/O Submission Queues: 64 00:36:17.548 Number of I/O Completion Queues: 64 00:36:17.548 00:36:17.548 ZNS Specific Controller Data 00:36:17.548 ============================ 00:36:17.548 Zone Append Size Limit: 0 00:36:17.548 00:36:17.548 00:36:17.548 Active Namespaces 00:36:17.548 ================= 00:36:17.548 Namespace ID:1 00:36:17.548 Error Recovery Timeout: Unlimited 00:36:17.548 Command Set Identifier: NVM (00h) 00:36:17.548 Deallocate: Supported 00:36:17.548 Deallocated/Unwritten Error: Supported 00:36:17.548 Deallocated Read Value: All 0x00 00:36:17.548 Deallocate in Write Zeroes: Not Supported 00:36:17.548 Deallocated Guard Field: 0xFFFF 00:36:17.548 Flush: Supported 00:36:17.548 Reservation: Not Supported 00:36:17.548 Metadata Transferred as: Separate Metadata Buffer 00:36:17.548 Namespace Sharing Capabilities: Private 00:36:17.548 Size (in LBAs): 1548666 (5GiB) 00:36:17.548 Capacity (in LBAs): 1548666 (5GiB) 00:36:17.548 Utilization (in LBAs): 1548666 (5GiB) 00:36:17.548 Thin Provisioning: Not Supported 00:36:17.548 Per-NS Atomic Units: No 00:36:17.548 Maximum Single Source Range Length: 128 
00:36:17.548 Maximum Copy Length: 128 00:36:17.548 Maximum Source Range Count: 128 00:36:17.548 NGUID/EUI64 Never Reused: No 00:36:17.548 Namespace Write Protected: No 00:36:17.548 Number of LBA Formats: 8 00:36:17.548 Current LBA Format: LBA Format #07 00:36:17.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.548 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.548 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.548 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.548 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.548 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.548 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:17.548 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.548 00:36:17.548 ===================================================== 00:36:17.548 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:17.548 ===================================================== 00:36:17.548 Controller Capabilities/Features 00:36:17.548 ================================ 00:36:17.548 Vendor ID: 1b36 00:36:17.548 Subsystem Vendor ID: 1af4 00:36:17.548 Serial Number: 12341 00:36:17.548 Model Number: QEMU NVMe Ctrl 00:36:17.548 Firmware Version: 8.0.0 00:36:17.548 Recommended Arb Burst: 6 00:36:17.548 IEEE OUI Identifier: 00 54 52 00:36:17.548 Multi-path I/O 00:36:17.548 May have multiple subsystem ports: No 00:36:17.548 May have multiple controllers: No 00:36:17.548 Associated with SR-IOV VF: No 00:36:17.548 Max Data Transfer Size: 524288 00:36:17.548 Max Number of Namespaces: 256 00:36:17.548 Max Number of I/O Queues: 64 00:36:17.549 NVMe Specification Version (VS): 1.4 00:36:17.549 NVMe Specification Version (Identify): 1.4 00:36:17.549 Maximum Queue Entries: 2048 00:36:17.549 Contiguous Queues Required: Yes 00:36:17.549 Arbitration Mechanisms Supported 00:36:17.549 Weighted Round Robin: Not Supported 00:36:17.549 Vendor Specific: Not Supported 00:36:17.549 Reset Timeout: 7500 ms 00:36:17.549 Doorbell Stride: 4 bytes 00:36:17.549 NVM Subsystem Reset: Not Supported 00:36:17.549 Command Sets Supported 00:36:17.549 NVM Command Set: Supported 00:36:17.549 Boot Partition: Not Supported 00:36:17.549 Memory Page Size Minimum: 4096 bytes 00:36:17.549 Memory Page Size Maximum: 65536 bytes 00:36:17.549 Persistent Memory Region: Not Supported 00:36:17.549 Optional Asynchronous Events Supported 00:36:17.549 Namespace Attribute Notices: Supported 00:36:17.549 Firmware Activation Notices: Not Supported 00:36:17.549 ANA Change Notices: Not Supported 00:36:17.549 PLE Aggregate Log Change Notices: Not Supported 00:36:17.549 LBA Status Info Alert Notices: Not Supported 00:36:17.549 EGE Aggregate Log Change Notices: Not Supported 00:36:17.549 Normal NVM Subsystem Shutdown event: Not Supported 00:36:17.549 Zone Descriptor Change Notices: Not Supported 00:36:17.549 Discovery Log Change Notices: Not Supported 00:36:17.549 Controller Attributes 00:36:17.549 128-bit Host Identifier: Not Supported 00:36:17.549 Non-Operational Permissive Mode: Not Supported 00:36:17.549 NVM Sets: Not Supported 00:36:17.549 Read Recovery Levels: Not Supported 00:36:17.549 Endurance Groups: Not Supported 00:36:17.549 Predictable Latency Mode: Not Supported 00:36:17.549 Traffic Based Keep Alive: Not Supported 00:36:17.549 Namespace Granularity: Not Supported 00:36:17.549 SQ Associations: Not Supported 00:36:17.549 UUID List: Not Supported 00:36:17.549 Multi-Domain Subsystem: Not Supported 00:36:17.549 Fixed Capacity Management: Not Supported 00:36:17.549 Variable Capacity 
Management: Not Supported 00:36:17.549 Delete Endurance Group: Not Supported 00:36:17.549 Delete NVM Set: Not Supported 00:36:17.549 Extended LBA Formats Supported: Supported 00:36:17.549 Flexible Data Placement Supported: Not Supported 00:36:17.549 00:36:17.549 Controller Memory Buffer Support 00:36:17.549 ================================ 00:36:17.549 Supported: No 00:36:17.549 00:36:17.549 Persistent Memory Region Support 00:36:17.549 ================================ 00:36:17.549 Supported: No 00:36:17.549 00:36:17.549 Admin Command Set Attributes 00:36:17.549 ============================ 00:36:17.549 Security Send/Receive: Not Supported 00:36:17.549 Format NVM: Supported 00:36:17.549 Firmware Activate/Download: Not Supported 00:36:17.549 Namespace Management: Supported 00:36:17.549 Device Self-Test: Not Supported 00:36:17.549 Directives: Supported 00:36:17.549 NVMe-MI: Not Supported 00:36:17.549 Virtualization Management: Not Supported 00:36:17.549 Doorbell Buffer Config: Supported 00:36:17.549 Get LBA Status Capability: Not Supported 00:36:17.549 Command & Feature Lockdown Capability: Not Supported 00:36:17.549 Abort Command Limit: 4 00:36:17.549 Async Event Request Limit: 4 00:36:17.549 Number of Firmware Slots: N/A 00:36:17.549 Firmware Slot 1 Read-Only: N/A 00:36:17.549 Firmware Activation Without Reset: N/A 00:36:17.549 Multiple Update Detection Support: N/A 00:36:17.549 Firmware Update Granularity: No Information Provided 00:36:17.549 Per-Namespace SMART Log: Yes 00:36:17.549 Asymmetric Namespace Access Log Page: Not Supported 00:36:17.549 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:36:17.549 Command Effects Log Page: Supported 00:36:17.549 Get Log Page Extended Data: Supported 00:36:17.549 Telemetry Log Pages: Not Supported 00:36:17.549 Persistent Event Log Pages: Not Supported 00:36:17.549 Supported Log Pages Log Page: May Support 00:36:17.549 Commands Supported & Effects Log Page: Not Supported 00:36:17.549 Feature Identifiers & Effects Log Page: May Support 00:36:17.549 NVMe-MI Commands & Effects Log Page: May Support 00:36:17.549 Data Area 4 for Telemetry Log: Not Supported 00:36:17.549 Error Log Page Entries Supported: 1 00:36:17.549 Keep Alive: Not Supported 00:36:17.549 00:36:17.549 NVM Command Set Attributes 00:36:17.549 ========================== 00:36:17.549 Submission Queue Entry Size 00:36:17.549 Max: 64 00:36:17.549 Min: 64 00:36:17.549 Completion Queue Entry Size 00:36:17.549 Max: 16 00:36:17.549 Min: 16 00:36:17.549 Number of Namespaces: 256 00:36:17.549 Compare Command: Supported 00:36:17.549 Write Uncorrectable Command: Not Supported 00:36:17.549 Dataset Management Command: Supported 00:36:17.549 Write Zeroes Command: Supported 00:36:17.549 Set Features Save Field: Supported 00:36:17.549 Reservations: Not Supported 00:36:17.549 Timestamp: Supported 00:36:17.549 Copy: Supported 00:36:17.549 Volatile Write Cache: Present 00:36:17.549 Atomic Write Unit (Normal): 1 00:36:17.549 Atomic Write Unit (PFail): 1 00:36:17.549 Atomic Compare & Write Unit: 1 00:36:17.549 Fused Compare & Write: Not Supported 00:36:17.549 Scatter-Gather List 00:36:17.549 SGL Command Set: Supported 00:36:17.549 SGL Keyed: Not Supported 00:36:17.549 SGL Bit Bucket Descriptor: Not Supported 00:36:17.549 SGL Metadata Pointer: Not Supported 00:36:17.549 Oversized SGL: Not Supported 00:36:17.549 SGL Metadata Address: Not Supported 00:36:17.549 SGL Offset: Not Supported 00:36:17.549 Transport SGL Data Block: Not Supported 00:36:17.549 Replay Protected Memory Block: Not Supported 00:36:17.549 
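The queue geometry reported for these controllers pins down the memory cost of a queue pair: 2048 Maximum Queue Entries with 64-byte submission entries and 16-byte completion entries come to 128 KiB per full-depth submission queue and 32 KiB per completion queue. A minimal shell sketch of that arithmetic, using only the identify fields shown above (the variable names are illustrative, not taken from the test scripts):

  qd=2048; sqe=64; cqe=16   # Maximum Queue Entries, SQ entry size, CQ entry size
  echo "SQ: $(( qd * sqe / 1024 )) KiB  CQ: $(( qd * cqe / 1024 )) KiB"
  # prints: SQ: 128 KiB  CQ: 32 KiB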
00:36:17.549 Firmware Slot Information 00:36:17.549 ========================= 00:36:17.549 Active slot: 1 00:36:17.549 Slot 1 Firmware Revision: 1.0 00:36:17.549 00:36:17.549 00:36:17.549 Commands Supported and Effects 00:36:17.549 ============================== 00:36:17.549 Admin Commands 00:36:17.549 -------------- 00:36:17.549 Delete I/O Submission Queue (00h): Supported 00:36:17.549 Create I/O Submission Queue (01h): Supported 00:36:17.549 Get Log Page (02h): Supported 00:36:17.549 Delete I/O Completion Queue (04h): Supported 00:36:17.549 Create I/O Completion Queue (05h): Supported 00:36:17.549 Identify (06h): Supported 00:36:17.549 Abort (08h): Supported 00:36:17.549 Set Features (09h): Supported 00:36:17.549 Get Features (0Ah): Supported 00:36:17.549 Asynchronous Event Request (0Ch): Supported 00:36:17.549 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:17.549 Directive Send (19h): Supported 00:36:17.549 Directive Receive (1Ah): Supported 00:36:17.549 Virtualization Management (1Ch): Supported 00:36:17.549 Doorbell Buffer Config (7Ch): Supported 00:36:17.549 Format NVM (80h): Supported LBA-Change 00:36:17.549 I/O Commands 00:36:17.549 ------------ 00:36:17.549 Flush (00h): Supported LBA-Change 00:36:17.549 Write (01h): Supported LBA-Change 00:36:17.549 Read (02h): Supported 00:36:17.549 Compare (05h): Supported 00:36:17.549 Write Zeroes (08h): Supported LBA-Change 00:36:17.549 Dataset Management (09h): Supported LBA-Change 00:36:17.549 Unknown (0Ch): Supported 00:36:17.549 Unknown (12h): Supported 00:36:17.549 Copy (19h): Supported LBA-Change 00:36:17.549 Unknown (1Dh): Supported LBA-Change 00:36:17.549 00:36:17.549 Error Log 00:36:17.549 ========= 00:36:17.549 00:36:17.549 Arbitration 00:36:17.549 =========== 00:36:17.549 Arbitration Burst: no limit 00:36:17.549 00:36:17.549 Power Management 00:36:17.549 ================ 00:36:17.549 Number of Power States: 1 00:36:17.549 Current Power State: Power State #0 00:36:17.549 Power State #0: 00:36:17.549 Max Power: 25.00 W 00:36:17.549 Non-Operational State: Operational 00:36:17.549 Entry Latency: 16 microseconds 00:36:17.549 Exit Latency: 4 microseconds 00:36:17.549 Relative Read Throughput: 0 00:36:17.549 Relative Read Latency: 0 00:36:17.549 Relative Write Throughput: 0 00:36:17.549 Relative Write Latency: 0 00:36:17.549 Idle Power: Not Reported 00:36:17.549 Active Power: Not Reported 00:36:17.549 Non-Operational Permissive Mode: Not Supported 00:36:17.549 00:36:17.549 Health Information 00:36:17.549 ================== 00:36:17.549 Critical Warnings: 00:36:17.549 Available Spare Space: OK 00:36:17.549 Temperature: OK 00:36:17.549 Device Reliability: OK 00:36:17.549 Read Only: No 00:36:17.549 Volatile Memory Backup: OK 00:36:17.549 Current Temperature: 323 Kelvin (50 Celsius) 00:36:17.549 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:17.549 Available Spare: 0% 00:36:17.549 Available Spare Threshold: 0% 00:36:17.549 Life Percentage Used: 0% 00:36:17.549 Data Units Read: 717 00:36:17.550 Data Units Written: 563 00:36:17.550 Host Read Commands: 33233 00:36:17.550 Host Write Commands: 30910 00:36:17.550 Controller Busy Time: 0 minutes 00:36:17.550 Power Cycles: 0 00:36:17.550 Power On Hours: 0 hours 00:36:17.550 Unsafe Shutdowns: 0 00:36:17.550 Unrecoverable Media Errors: 0 00:36:17.550 Lifetime Error Log Entries: 0 00:36:17.550 Warning Temperature Time: 0 minutes 00:36:17.550 Critical Temperature Time: 0 minutes 00:36:17.550 00:36:17.550 Number of Queues 00:36:17.550 ================ 00:36:17.550 Number of I/O 
Submission Queues: 64 00:36:17.550 Number of I/O Completion Queues: 64 00:36:17.550 00:36:17.550 ZNS Specific Controller Data 00:36:17.550 ============================ 00:36:17.550 Zone Append Size Limit: 0 00:36:17.550 00:36:17.550 00:36:17.550 Active Namespaces 00:36:17.550 ================= 00:36:17.550 Namespace ID:1 00:36:17.550 Error Recovery Timeout: Unlimited 00:36:17.550 Command Set Identifier: NVM (00h) [2024-04-18 09:05:19.528578] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69866 terminated unexpected 00:36:17.550 Deallocate: Supported 00:36:17.550 Deallocated/Unwritten Error: Supported 00:36:17.550 Deallocated Read Value: All 0x00 00:36:17.550 Deallocate in Write Zeroes: Not Supported 00:36:17.550 Deallocated Guard Field: 0xFFFF 00:36:17.550 Flush: Supported 00:36:17.550 Reservation: Not Supported 00:36:17.550 Namespace Sharing Capabilities: Private 00:36:17.550 Size (in LBAs): 1310720 (5GiB) 00:36:17.550 Capacity (in LBAs): 1310720 (5GiB) 00:36:17.550 Utilization (in LBAs): 1310720 (5GiB) 00:36:17.550 Thin Provisioning: Not Supported 00:36:17.550 Per-NS Atomic Units: No 00:36:17.550 Maximum Single Source Range Length: 128 00:36:17.550 Maximum Copy Length: 128 00:36:17.550 Maximum Source Range Count: 128 00:36:17.550 NGUID/EUI64 Never Reused: No 00:36:17.550 Namespace Write Protected: No 00:36:17.550 Number of LBA Formats: 8 00:36:17.550 Current LBA Format: LBA Format #04 00:36:17.550 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.550 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.550 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.550 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.550 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.550 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.550 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:17.550 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.550 00:36:17.550 ===================================================== 00:36:17.550 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:17.550 ===================================================== 00:36:17.550 Controller Capabilities/Features 00:36:17.550 ================================ 00:36:17.550 Vendor ID: 1b36 00:36:17.550 Subsystem Vendor ID: 1af4 00:36:17.550 Serial Number: 12343 00:36:17.550 Model Number: QEMU NVMe Ctrl 00:36:17.550 Firmware Version: 8.0.0 00:36:17.550 Recommended Arb Burst: 6 00:36:17.550 IEEE OUI Identifier: 00 54 52 00:36:17.550 Multi-path I/O 00:36:17.550 May have multiple subsystem ports: No 00:36:17.550 May have multiple controllers: Yes 00:36:17.550 Associated with SR-IOV VF: No 00:36:17.550 Max Data Transfer Size: 524288 00:36:17.550 Max Number of Namespaces: 256 00:36:17.550 Max Number of I/O Queues: 64 00:36:17.550 NVMe Specification Version (VS): 1.4 00:36:17.550 NVMe Specification Version (Identify): 1.4 00:36:17.550 Maximum Queue Entries: 2048 00:36:17.550 Contiguous Queues Required: Yes 00:36:17.550 Arbitration Mechanisms Supported 00:36:17.550 Weighted Round Robin: Not Supported 00:36:17.550 Vendor Specific: Not Supported 00:36:17.550 Reset Timeout: 7500 ms 00:36:17.550 Doorbell Stride: 4 bytes 00:36:17.550 NVM Subsystem Reset: Not Supported 00:36:17.550 Command Sets Supported 00:36:17.550 NVM Command Set: Supported 00:36:17.550 Boot Partition: Not Supported 00:36:17.550 Memory Page Size Minimum: 4096 bytes 00:36:17.550 Memory Page Size Maximum: 65536 bytes 00:36:17.550 Persistent Memory Region: Not Supported 00:36:17.550 
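The LBA counts in these namespace listings convert directly to the capacities shown in parentheses: namespace 1 of controller 12341 reports 1310720 LBAs in LBA Format #04, whose data size is 4096 bytes, and 1310720 * 4096 = 5368709120 bytes, i.e. exactly 5 GiB. A sketch of the conversion, assuming nothing beyond those two identify fields (names are illustrative):

  blocks=1310720; lba_size=4096   # Size (in LBAs) and current-format data size
  echo "$(( blocks * lba_size )) bytes = $(( (blocks * lba_size) >> 30 )) GiB"
  # prints: 5368709120 bytes = 5 GiB

The same arithmetic reproduces the other annotations in this dump: 262144 * 4096 = 1 GiB and 1048576 * 4096 = 4 GiB.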
Optional Asynchronous Events Supported 00:36:17.550 Namespace Attribute Notices: Supported 00:36:17.550 Firmware Activation Notices: Not Supported 00:36:17.550 ANA Change Notices: Not Supported 00:36:17.550 PLE Aggregate Log Change Notices: Not Supported 00:36:17.550 LBA Status Info Alert Notices: Not Supported 00:36:17.550 EGE Aggregate Log Change Notices: Not Supported 00:36:17.550 Normal NVM Subsystem Shutdown event: Not Supported 00:36:17.550 Zone Descriptor Change Notices: Not Supported 00:36:17.550 Discovery Log Change Notices: Not Supported 00:36:17.550 Controller Attributes 00:36:17.550 128-bit Host Identifier: Not Supported 00:36:17.550 Non-Operational Permissive Mode: Not Supported 00:36:17.550 NVM Sets: Not Supported 00:36:17.550 Read Recovery Levels: Not Supported 00:36:17.550 Endurance Groups: Supported 00:36:17.550 Predictable Latency Mode: Not Supported 00:36:17.550 Traffic Based Keep Alive: Not Supported 00:36:17.550 Namespace Granularity: Not Supported 00:36:17.550 SQ Associations: Not Supported 00:36:17.550 UUID List: Not Supported 00:36:17.550 Multi-Domain Subsystem: Not Supported 00:36:17.550 Fixed Capacity Management: Not Supported 00:36:17.550 Variable Capacity Management: Not Supported 00:36:17.550 Delete Endurance Group: Not Supported 00:36:17.550 Delete NVM Set: Not Supported 00:36:17.550 Extended LBA Formats Supported: Supported 00:36:17.550 Flexible Data Placement Supported: Supported 00:36:17.550 00:36:17.550 Controller Memory Buffer Support 00:36:17.550 ================================ 00:36:17.550 Supported: No 00:36:17.550 00:36:17.550 Persistent Memory Region Support 00:36:17.550 ================================ 00:36:17.550 Supported: No 00:36:17.550 00:36:17.550 Admin Command Set Attributes 00:36:17.550 ============================ 00:36:17.550 Security Send/Receive: Not Supported 00:36:17.550 Format NVM: Supported 00:36:17.550 Firmware Activate/Download: Not Supported 00:36:17.550 Namespace Management: Supported 00:36:17.550 Device Self-Test: Not Supported 00:36:17.550 Directives: Supported 00:36:17.550 NVMe-MI: Not Supported 00:36:17.550 Virtualization Management: Not Supported 00:36:17.550 Doorbell Buffer Config: Supported 00:36:17.550 Get LBA Status Capability: Not Supported 00:36:17.550 Command & Feature Lockdown Capability: Not Supported 00:36:17.550 Abort Command Limit: 4 00:36:17.550 Async Event Request Limit: 4 00:36:17.550 Number of Firmware Slots: N/A 00:36:17.550 Firmware Slot 1 Read-Only: N/A 00:36:17.550 Firmware Activation Without Reset: N/A 00:36:17.550 Multiple Update Detection Support: N/A 00:36:17.550 Firmware Update Granularity: No Information Provided 00:36:17.550 Per-Namespace SMART Log: Yes 00:36:17.550 Asymmetric Namespace Access Log Page: Not Supported 00:36:17.550 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:36:17.550 Command Effects Log Page: Supported 00:36:17.550 Get Log Page Extended Data: Supported 00:36:17.550 Telemetry Log Pages: Not Supported 00:36:17.550 Persistent Event Log Pages: Not Supported 00:36:17.550 Supported Log Pages Log Page: May Support 00:36:17.550 Commands Supported & Effects Log Page: Not Supported 00:36:17.550 Feature Identifiers & Effects Log Page: May Support 00:36:17.550 NVMe-MI Commands & Effects Log Page: May Support 00:36:17.550 Data Area 4 for Telemetry Log: Not Supported 00:36:17.550 Error Log Page Entries Supported: 1 00:36:17.550 Keep Alive: Not Supported 00:36:17.550 00:36:17.550 NVM Command Set Attributes 00:36:17.550 ========================== 00:36:17.550 Submission Queue Entry Size 
00:36:17.550 Max: 64 00:36:17.550 Min: 64 00:36:17.550 Completion Queue Entry Size 00:36:17.550 Max: 16 00:36:17.550 Min: 16 00:36:17.550 Number of Namespaces: 256 00:36:17.550 Compare Command: Supported 00:36:17.550 Write Uncorrectable Command: Not Supported 00:36:17.550 Dataset Management Command: Supported 00:36:17.550 Write Zeroes Command: Supported 00:36:17.550 Set Features Save Field: Supported 00:36:17.550 Reservations: Not Supported 00:36:17.550 Timestamp: Supported 00:36:17.550 Copy: Supported 00:36:17.550 Volatile Write Cache: Present 00:36:17.550 Atomic Write Unit (Normal): 1 00:36:17.550 Atomic Write Unit (PFail): 1 00:36:17.550 Atomic Compare & Write Unit: 1 00:36:17.550 Fused Compare & Write: Not Supported 00:36:17.550 Scatter-Gather List 00:36:17.550 SGL Command Set: Supported 00:36:17.551 SGL Keyed: Not Supported 00:36:17.551 SGL Bit Bucket Descriptor: Not Supported 00:36:17.551 SGL Metadata Pointer: Not Supported 00:36:17.551 Oversized SGL: Not Supported 00:36:17.551 SGL Metadata Address: Not Supported 00:36:17.551 SGL Offset: Not Supported 00:36:17.551 Transport SGL Data Block: Not Supported 00:36:17.551 Replay Protected Memory Block: Not Supported 00:36:17.551 00:36:17.551 Firmware Slot Information 00:36:17.551 ========================= 00:36:17.551 Active slot: 1 00:36:17.551 Slot 1 Firmware Revision: 1.0 00:36:17.551 00:36:17.551 00:36:17.551 Commands Supported and Effects 00:36:17.551 ============================== 00:36:17.551 Admin Commands 00:36:17.551 -------------- 00:36:17.551 Delete I/O Submission Queue (00h): Supported 00:36:17.551 Create I/O Submission Queue (01h): Supported 00:36:17.551 Get Log Page (02h): Supported 00:36:17.551 Delete I/O Completion Queue (04h): Supported 00:36:17.551 Create I/O Completion Queue (05h): Supported 00:36:17.551 Identify (06h): Supported 00:36:17.551 Abort (08h): Supported 00:36:17.551 Set Features (09h): Supported 00:36:17.551 Get Features (0Ah): Supported 00:36:17.551 Asynchronous Event Request (0Ch): Supported 00:36:17.551 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:17.551 Directive Send (19h): Supported 00:36:17.551 Directive Receive (1Ah): Supported 00:36:17.551 Virtualization Management (1Ch): Supported 00:36:17.551 Doorbell Buffer Config (7Ch): Supported 00:36:17.551 Format NVM (80h): Supported LBA-Change 00:36:17.551 I/O Commands 00:36:17.551 ------------ 00:36:17.551 Flush (00h): Supported LBA-Change 00:36:17.551 Write (01h): Supported LBA-Change 00:36:17.551 Read (02h): Supported 00:36:17.551 Compare (05h): Supported 00:36:17.551 Write Zeroes (08h): Supported LBA-Change 00:36:17.551 Dataset Management (09h): Supported LBA-Change 00:36:17.551 Unknown (0Ch): Supported 00:36:17.551 Unknown (12h): Supported 00:36:17.551 Copy (19h): Supported LBA-Change 00:36:17.551 Unknown (1Dh): Supported LBA-Change 00:36:17.551 00:36:17.551 Error Log 00:36:17.551 ========= 00:36:17.551 00:36:17.551 Arbitration 00:36:17.551 =========== 00:36:17.551 Arbitration Burst: no limit 00:36:17.551 00:36:17.551 Power Management 00:36:17.551 ================ 00:36:17.551 Number of Power States: 1 00:36:17.551 Current Power State: Power State #0 00:36:17.551 Power State #0: 00:36:17.551 Max Power: 25.00 W 00:36:17.551 Non-Operational State: Operational 00:36:17.551 Entry Latency: 16 microseconds 00:36:17.551 Exit Latency: 4 microseconds 00:36:17.551 Relative Read Throughput: 0 00:36:17.551 Relative Read Latency: 0 00:36:17.551 Relative Write Throughput: 0 00:36:17.551 Relative Write Latency: 0 00:36:17.551 Idle Power: Not 
Reported 00:36:17.551 Active Power: Not Reported 00:36:17.551 Non-Operational Permissive Mode: Not Supported 00:36:17.551 00:36:17.551 Health Information 00:36:17.551 ================== 00:36:17.551 Critical Warnings: 00:36:17.551 Available Spare Space: OK 00:36:17.551 Temperature: OK 00:36:17.551 Device Reliability: OK 00:36:17.551 Read Only: No 00:36:17.551 Volatile Memory Backup: OK 00:36:17.551 Current Temperature: 323 Kelvin (50 Celsius) 00:36:17.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:17.551 Available Spare: 0% 00:36:17.551 Available Spare Threshold: 0% 00:36:17.551 Life Percentage Used: 0% 00:36:17.551 Data Units Read: 700 00:36:17.551 Data Units Written: 594 00:36:17.551 Host Read Commands: 32923 00:36:17.551 Host Write Commands: 31513 00:36:17.551 Controller Busy Time: 0 minutes 00:36:17.551 Power Cycles: 0 00:36:17.551 Power On Hours: 0 hours 00:36:17.551 Unsafe Shutdowns: 0 00:36:17.551 Unrecoverable Media Errors: 0 00:36:17.551 Lifetime Error Log Entries: 0 00:36:17.551 Warning Temperature Time: 0 minutes 00:36:17.551 Critical Temperature Time: 0 minutes 00:36:17.551 00:36:17.551 Number of Queues 00:36:17.551 ================ 00:36:17.551 Number of I/O Submission Queues: 64 00:36:17.551 Number of I/O Completion Queues: 64 00:36:17.551 00:36:17.551 ZNS Specific Controller Data 00:36:17.551 ============================ 00:36:17.551 Zone Append Size Limit: 0 00:36:17.551 00:36:17.551 00:36:17.551 Active Namespaces 00:36:17.551 ================= 00:36:17.551 Namespace ID:1 00:36:17.551 Error Recovery Timeout: Unlimited 00:36:17.551 Command Set Identifier: NVM (00h) 00:36:17.551 Deallocate: Supported 00:36:17.551 Deallocated/Unwritten Error: Supported 00:36:17.551 Deallocated Read Value: All 0x00 00:36:17.551 Deallocate in Write Zeroes: Not Supported 00:36:17.551 Deallocated Guard Field: 0xFFFF 00:36:17.551 Flush: Supported 00:36:17.551 Reservation: Not Supported 00:36:17.551 Namespace Sharing Capabilities: Multiple Controllers 00:36:17.551 Size (in LBAs): 262144 (1GiB) 00:36:17.551 Capacity (in LBAs): 262144 (1GiB) 00:36:17.551 Utilization (in LBAs): 262144 (1GiB) 00:36:17.551 Thin Provisioning: Not Supported 00:36:17.551 Per-NS Atomic Units: No 00:36:17.551 Maximum Single Source Range Length: 128 00:36:17.551 Maximum Copy Length: 128 00:36:17.551 Maximum Source Range Count: 128 00:36:17.551 NGUID/EUI64 Never Reused: No 00:36:17.551 Namespace Write Protected: No 00:36:17.551 Endurance group ID: 1 00:36:17.551 Number of LBA Formats: 8 00:36:17.551 Current LBA Format: LBA Format #04 00:36:17.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 [2024-04-18 09:05:19.530233] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69866 terminated unexpected 00:36:17.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.551 00:36:17.551 Get Feature FDP: 00:36:17.551 ================ 00:36:17.551 Enabled: Yes 00:36:17.551 FDP configuration index: 0 00:36:17.551 00:36:17.551 FDP configurations log page 00:36:17.551 =========================== 00:36:17.551 Number of FDP configurations: 1 00:36:17.551 Version: 0 00:36:17.551 Size: 112 00:36:17.551 FDP 
Configuration Descriptor: 0 00:36:17.551 Descriptor Size: 96 00:36:17.551 Reclaim Group Identifier format: 2 00:36:17.551 FDP Volatile Write Cache: Not Present 00:36:17.551 FDP Configuration: Valid 00:36:17.551 Vendor Specific Size: 0 00:36:17.551 Number of Reclaim Groups: 2 00:36:17.551 Number of Reclaim Unit Handles: 8 00:36:17.551 Max Placement Identifiers: 128 00:36:17.551 Number of Namespaces Supported: 256 00:36:17.551 Reclaim Unit Nominal Size: 6000000 bytes 00:36:17.551 Estimated Reclaim Unit Time Limit: Not Reported 00:36:17.551 RUH Desc #000: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #001: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #002: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #003: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #004: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #005: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #006: RUH Type: Initially Isolated 00:36:17.551 RUH Desc #007: RUH Type: Initially Isolated 00:36:17.551 00:36:17.551 FDP reclaim unit handle usage log page 00:36:17.551 ====================================== 00:36:17.551 Number of Reclaim Unit Handles: 8 00:36:17.551 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:36:17.551 RUH Usage Desc #001: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #002: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #003: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #004: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #005: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #006: RUH Attributes: Unused 00:36:17.551 RUH Usage Desc #007: RUH Attributes: Unused 00:36:17.551 00:36:17.551 FDP statistics log page 00:36:17.551 ======================= 00:36:17.551 Host bytes with metadata written: 380805120 00:36:17.551 Media bytes with metadata written: 380846080 00:36:17.551 Media bytes erased: 0 00:36:17.551 00:36:17.551 FDP events log page 00:36:17.551 =================== 00:36:17.551 Number of FDP events: 0 00:36:17.551 00:36:17.551 ===================================================== 00:36:17.551 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:17.551 ===================================================== 00:36:17.551 Controller Capabilities/Features 00:36:17.551 ================================ 00:36:17.551 Vendor ID: 1b36 00:36:17.551 Subsystem Vendor ID: 1af4 00:36:17.551 Serial Number: 12342 00:36:17.552 Model Number: QEMU NVMe Ctrl 00:36:17.552 Firmware Version: 8.0.0 00:36:17.552 Recommended Arb Burst: 6 00:36:17.552 IEEE OUI Identifier: 00 54 52 00:36:17.552 Multi-path I/O 00:36:17.552 May have multiple subsystem ports: No 00:36:17.552 May have multiple controllers: No 00:36:17.552 Associated with SR-IOV VF: No 00:36:17.552 Max Data Transfer Size: 524288 00:36:17.552 Max Number of Namespaces: 256 00:36:17.552 Max Number of I/O Queues: 64 00:36:17.552 NVMe Specification Version (VS): 1.4 00:36:17.552 NVMe Specification Version (Identify): 1.4 00:36:17.552 Maximum Queue Entries: 2048 00:36:17.552 Contiguous Queues Required: Yes 00:36:17.552 Arbitration Mechanisms Supported 00:36:17.552 Weighted Round Robin: Not Supported 00:36:17.552 Vendor Specific: Not Supported 00:36:17.552 Reset Timeout: 7500 ms 00:36:17.552 Doorbell Stride: 4 bytes 00:36:17.552 NVM Subsystem Reset: Not Supported 00:36:17.552 Command Sets Supported 00:36:17.552 NVM Command Set: Supported 00:36:17.552 Boot Partition: Not Supported 00:36:17.552 Memory Page Size Minimum: 4096 bytes 00:36:17.552 Memory Page Size Maximum: 65536 bytes 00:36:17.552 Persistent Memory Region: Not 
Supported 00:36:17.552 Optional Asynchronous Events Supported 00:36:17.552 Namespace Attribute Notices: Supported 00:36:17.552 Firmware Activation Notices: Not Supported 00:36:17.552 ANA Change Notices: Not Supported 00:36:17.552 PLE Aggregate Log Change Notices: Not Supported 00:36:17.552 LBA Status Info Alert Notices: Not Supported 00:36:17.552 EGE Aggregate Log Change Notices: Not Supported 00:36:17.552 Normal NVM Subsystem Shutdown event: Not Supported 00:36:17.552 Zone Descriptor Change Notices: Not Supported 00:36:17.552 Discovery Log Change Notices: Not Supported 00:36:17.552 Controller Attributes 00:36:17.552 128-bit Host Identifier: Not Supported 00:36:17.552 Non-Operational Permissive Mode: Not Supported 00:36:17.552 NVM Sets: Not Supported 00:36:17.552 Read Recovery Levels: Not Supported 00:36:17.552 Endurance Groups: Not Supported 00:36:17.552 Predictable Latency Mode: Not Supported 00:36:17.552 Traffic Based Keep Alive: Not Supported 00:36:17.552 Namespace Granularity: Not Supported 00:36:17.552 SQ Associations: Not Supported 00:36:17.552 UUID List: Not Supported 00:36:17.552 Multi-Domain Subsystem: Not Supported 00:36:17.552 Fixed Capacity Management: Not Supported 00:36:17.552 Variable Capacity Management: Not Supported 00:36:17.552 Delete Endurance Group: Not Supported 00:36:17.552 Delete NVM Set: Not Supported 00:36:17.552 Extended LBA Formats Supported: Supported 00:36:17.552 Flexible Data Placement Supported: Not Supported 00:36:17.552 00:36:17.552 Controller Memory Buffer Support 00:36:17.552 ================================ 00:36:17.552 Supported: No 00:36:17.552 00:36:17.552 Persistent Memory Region Support 00:36:17.552 ================================ 00:36:17.552 Supported: No 00:36:17.552 00:36:17.552 Admin Command Set Attributes 00:36:17.552 ============================ 00:36:17.552 Security Send/Receive: Not Supported 00:36:17.552 Format NVM: Supported 00:36:17.552 Firmware Activate/Download: Not Supported 00:36:17.552 Namespace Management: Supported 00:36:17.552 Device Self-Test: Not Supported 00:36:17.552 Directives: Supported 00:36:17.552 NVMe-MI: Not Supported 00:36:17.552 Virtualization Management: Not Supported 00:36:17.552 Doorbell Buffer Config: Supported 00:36:17.552 Get LBA Status Capability: Not Supported 00:36:17.552 Command & Feature Lockdown Capability: Not Supported 00:36:17.552 Abort Command Limit: 4 00:36:17.552 Async Event Request Limit: 4 00:36:17.552 Number of Firmware Slots: N/A 00:36:17.552 Firmware Slot 1 Read-Only: N/A 00:36:17.552 Firmware Activation Without Reset: N/A 00:36:17.552 Multiple Update Detection Support: N/A 00:36:17.552 Firmware Update Granularity: No Information Provided 00:36:17.552 Per-Namespace SMART Log: Yes 00:36:17.552 Asymmetric Namespace Access Log Page: Not Supported 00:36:17.552 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:36:17.552 Command Effects Log Page: Supported 00:36:17.552 Get Log Page Extended Data: Supported 00:36:17.552 Telemetry Log Pages: Not Supported 00:36:17.552 Persistent Event Log Pages: Not Supported 00:36:17.552 Supported Log Pages Log Page: May Support 00:36:17.552 Commands Supported & Effects Log Page: Not Supported 00:36:17.552 Feature Identifiers & Effects Log Page: May Support 00:36:17.552 NVMe-MI Commands & Effects Log Page: May Support 00:36:17.552 Data Area 4 for Telemetry Log: Not Supported 00:36:17.552 Error Log Page Entries Supported: 1 00:36:17.552 Keep Alive: Not Supported 00:36:17.552 00:36:17.552 NVM Command Set Attributes 00:36:17.552 ========================== 00:36:17.552 
Submission Queue Entry Size 00:36:17.552 Max: 64 00:36:17.552 Min: 64 00:36:17.552 Completion Queue Entry Size 00:36:17.552 Max: 16 00:36:17.552 Min: 16 00:36:17.552 Number of Namespaces: 256 00:36:17.552 Compare Command: Supported 00:36:17.552 Write Uncorrectable Command: Not Supported 00:36:17.552 Dataset Management Command: Supported 00:36:17.552 Write Zeroes Command: Supported 00:36:17.552 Set Features Save Field: Supported 00:36:17.552 Reservations: Not Supported 00:36:17.552 Timestamp: Supported 00:36:17.552 Copy: Supported 00:36:17.552 Volatile Write Cache: Present 00:36:17.552 Atomic Write Unit (Normal): 1 00:36:17.552 Atomic Write Unit (PFail): 1 00:36:17.552 Atomic Compare & Write Unit: 1 00:36:17.552 Fused Compare & Write: Not Supported 00:36:17.552 Scatter-Gather List 00:36:17.552 SGL Command Set: Supported 00:36:17.552 SGL Keyed: Not Supported 00:36:17.552 SGL Bit Bucket Descriptor: Not Supported 00:36:17.552 SGL Metadata Pointer: Not Supported 00:36:17.552 Oversized SGL: Not Supported 00:36:17.552 SGL Metadata Address: Not Supported 00:36:17.552 SGL Offset: Not Supported 00:36:17.552 Transport SGL Data Block: Not Supported 00:36:17.552 Replay Protected Memory Block: Not Supported 00:36:17.552 00:36:17.552 Firmware Slot Information 00:36:17.552 ========================= 00:36:17.552 Active slot: 1 00:36:17.552 Slot 1 Firmware Revision: 1.0 00:36:17.552 00:36:17.552 00:36:17.552 Commands Supported and Effects 00:36:17.552 ============================== 00:36:17.552 Admin Commands 00:36:17.552 -------------- 00:36:17.552 Delete I/O Submission Queue (00h): Supported 00:36:17.552 Create I/O Submission Queue (01h): Supported 00:36:17.552 Get Log Page (02h): Supported 00:36:17.552 Delete I/O Completion Queue (04h): Supported 00:36:17.552 Create I/O Completion Queue (05h): Supported 00:36:17.552 Identify (06h): Supported 00:36:17.552 Abort (08h): Supported 00:36:17.552 Set Features (09h): Supported 00:36:17.552 Get Features (0Ah): Supported 00:36:17.552 Asynchronous Event Request (0Ch): Supported 00:36:17.552 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:17.552 Directive Send (19h): Supported 00:36:17.553 Directive Receive (1Ah): Supported 00:36:17.553 Virtualization Management (1Ch): Supported 00:36:17.553 Doorbell Buffer Config (7Ch): Supported 00:36:17.553 Format NVM (80h): Supported LBA-Change 00:36:17.553 I/O Commands 00:36:17.553 ------------ 00:36:17.553 Flush (00h): Supported LBA-Change 00:36:17.553 Write (01h): Supported LBA-Change 00:36:17.553 Read (02h): Supported 00:36:17.553 Compare (05h): Supported 00:36:17.553 Write Zeroes (08h): Supported LBA-Change 00:36:17.553 Dataset Management (09h): Supported LBA-Change 00:36:17.553 Unknown (0Ch): Supported 00:36:17.553 Unknown (12h): Supported 00:36:17.553 Copy (19h): Supported LBA-Change 00:36:17.553 Unknown (1Dh): Supported LBA-Change 00:36:17.553 00:36:17.553 Error Log 00:36:17.553 ========= 00:36:17.553 00:36:17.553 Arbitration 00:36:17.553 =========== 00:36:17.553 Arbitration Burst: no limit 00:36:17.553 00:36:17.553 Power Management 00:36:17.553 ================ 00:36:17.553 Number of Power States: 1 00:36:17.553 Current Power State: Power State #0 00:36:17.553 Power State #0: 00:36:17.553 Max Power: 25.00 W 00:36:17.553 Non-Operational State: Operational 00:36:17.553 Entry Latency: 16 microseconds 00:36:17.553 Exit Latency: 4 microseconds 00:36:17.553 Relative Read Throughput: 0 00:36:17.553 Relative Read Latency: 0 00:36:17.553 Relative Write Throughput: 0 00:36:17.553 Relative Write Latency: 0 
00:36:17.553 Idle Power: Not Reported 00:36:17.553 Active Power: Not Reported 00:36:17.553 Non-Operational Permissive Mode: Not Supported 00:36:17.553 00:36:17.553 Health Information 00:36:17.553 ================== 00:36:17.553 Critical Warnings: 00:36:17.553 Available Spare Space: OK 00:36:17.553 Temperature: OK 00:36:17.553 Device Reliability: OK 00:36:17.553 Read Only: No 00:36:17.553 Volatile Memory Backup: OK 00:36:17.553 Current Temperature: 323 Kelvin (50 Celsius) 00:36:17.553 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:17.553 Available Spare: 0% 00:36:17.553 Available Spare Threshold: 0% 00:36:17.553 Life Percentage Used: 0% 00:36:17.553 Data Units Read: 1972 00:36:17.553 Data Units Written: 1652 00:36:17.553 Host Read Commands: 96989 00:36:17.553 Host Write Commands: 92760 00:36:17.553 Controller Busy Time: 0 minutes 00:36:17.553 Power Cycles: 0 00:36:17.553 Power On Hours: 0 hours 00:36:17.553 Unsafe Shutdowns: 0 00:36:17.553 Unrecoverable Media Errors: 0 00:36:17.553 Lifetime Error Log Entries: 0 00:36:17.553 Warning Temperature Time: 0 minutes 00:36:17.553 Critical Temperature Time: 0 minutes 00:36:17.553 00:36:17.553 Number of Queues 00:36:17.553 ================ 00:36:17.553 Number of I/O Submission Queues: 64 00:36:17.553 Number of I/O Completion Queues: 64 00:36:17.553 00:36:17.553 ZNS Specific Controller Data 00:36:17.553 ============================ 00:36:17.553 Zone Append Size Limit: 0 00:36:17.553 00:36:17.553 00:36:17.553 Active Namespaces 00:36:17.553 ================= 00:36:17.553 Namespace ID:1 00:36:17.553 Error Recovery Timeout: Unlimited 00:36:17.553 Command Set Identifier: NVM (00h) 00:36:17.553 Deallocate: Supported 00:36:17.553 Deallocated/Unwritten Error: Supported 00:36:17.553 Deallocated Read Value: All 0x00 00:36:17.553 Deallocate in Write Zeroes: Not Supported 00:36:17.553 Deallocated Guard Field: 0xFFFF 00:36:17.553 Flush: Supported 00:36:17.553 Reservation: Not Supported 00:36:17.553 Namespace Sharing Capabilities: Private 00:36:17.553 Size (in LBAs): 1048576 (4GiB) 00:36:17.553 Capacity (in LBAs): 1048576 (4GiB) 00:36:17.553 Utilization (in LBAs): 1048576 (4GiB) 00:36:17.553 Thin Provisioning: Not Supported 00:36:17.553 Per-NS Atomic Units: No 00:36:17.553 Maximum Single Source Range Length: 128 00:36:17.553 Maximum Copy Length: 128 00:36:17.553 Maximum Source Range Count: 128 00:36:17.553 NGUID/EUI64 Never Reused: No 00:36:17.553 Namespace Write Protected: No 00:36:17.553 Number of LBA Formats: 8 00:36:17.553 Current LBA Format: LBA Format #04 00:36:17.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.553 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.553 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.553 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.553 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.553 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.553 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:17.553 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.553 00:36:17.553 Namespace ID:2 00:36:17.553 Error Recovery Timeout: Unlimited 00:36:17.553 Command Set Identifier: NVM (00h) 00:36:17.553 Deallocate: Supported 00:36:17.553 Deallocated/Unwritten Error: Supported 00:36:17.553 Deallocated Read Value: All 0x00 00:36:17.553 Deallocate in Write Zeroes: Not Supported 00:36:17.553 Deallocated Guard Field: 0xFFFF 00:36:17.553 Flush: Supported 00:36:17.553 Reservation: Not Supported 00:36:17.553 Namespace Sharing Capabilities: Private 00:36:17.553 Size (in LBAs): 
1048576 (4GiB) 00:36:17.553 Capacity (in LBAs): 1048576 (4GiB) 00:36:17.553 Utilization (in LBAs): 1048576 (4GiB) 00:36:17.553 Thin Provisioning: Not Supported 00:36:17.553 Per-NS Atomic Units: No 00:36:17.553 Maximum Single Source Range Length: 128 00:36:17.553 Maximum Copy Length: 128 00:36:17.553 Maximum Source Range Count: 128 00:36:17.553 NGUID/EUI64 Never Reused: No 00:36:17.553 Namespace Write Protected: No 00:36:17.553 Number of LBA Formats: 8 00:36:17.553 Current LBA Format: LBA Format #04 00:36:17.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.553 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.553 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.553 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.553 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.553 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.553 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:17.553 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.553 00:36:17.553 Namespace ID:3 00:36:17.553 Error Recovery Timeout: Unlimited 00:36:17.553 Command Set Identifier: NVM (00h) 00:36:17.553 Deallocate: Supported 00:36:17.553 Deallocated/Unwritten Error: Supported 00:36:17.553 Deallocated Read Value: All 0x00 00:36:17.553 Deallocate in Write Zeroes: Not Supported 00:36:17.553 Deallocated Guard Field: 0xFFFF 00:36:17.553 Flush: Supported 00:36:17.553 Reservation: Not Supported 00:36:17.553 Namespace Sharing Capabilities: Private 00:36:17.553 Size (in LBAs): 1048576 (4GiB) 00:36:17.553 Capacity (in LBAs): 1048576 (4GiB) 00:36:17.553 Utilization (in LBAs): 1048576 (4GiB) 00:36:17.553 Thin Provisioning: Not Supported 00:36:17.553 Per-NS Atomic Units: No 00:36:17.553 Maximum Single Source Range Length: 128 00:36:17.553 Maximum Copy Length: 128 00:36:17.553 Maximum Source Range Count: 128 00:36:17.553 NGUID/EUI64 Never Reused: No 00:36:17.553 Namespace Write Protected: No 00:36:17.553 Number of LBA Formats: 8 00:36:17.553 Current LBA Format: LBA Format #04 00:36:17.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:17.553 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:17.553 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:17.553 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:17.553 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:17.553 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:17.553 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:17.553 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:17.553 00:36:17.553 09:05:19 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:17.553 09:05:19 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:18.131 ===================================================== 00:36:18.131 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:18.131 ===================================================== 00:36:18.131 Controller Capabilities/Features 00:36:18.131 ================================ 00:36:18.131 Vendor ID: 1b36 00:36:18.131 Subsystem Vendor ID: 1af4 00:36:18.131 Serial Number: 12340 00:36:18.131 Model Number: QEMU NVMe Ctrl 00:36:18.131 Firmware Version: 8.0.0 00:36:18.131 Recommended Arb Burst: 6 00:36:18.131 IEEE OUI Identifier: 00 54 52 00:36:18.131 Multi-path I/O 00:36:18.131 May have multiple subsystem ports: No 00:36:18.131 May have multiple controllers: No 00:36:18.131 Associated with SR-IOV VF: No 00:36:18.131 Max Data Transfer Size: 524288 00:36:18.131 Max Number of Namespaces: 256 
00:36:18.131 Max Number of I/O Queues: 64 00:36:18.131 NVMe Specification Version (VS): 1.4 00:36:18.131 NVMe Specification Version (Identify): 1.4 00:36:18.131 Maximum Queue Entries: 2048 00:36:18.131 Contiguous Queues Required: Yes 00:36:18.131 Arbitration Mechanisms Supported 00:36:18.131 Weighted Round Robin: Not Supported 00:36:18.131 Vendor Specific: Not Supported 00:36:18.131 Reset Timeout: 7500 ms 00:36:18.131 Doorbell Stride: 4 bytes 00:36:18.131 NVM Subsystem Reset: Not Supported 00:36:18.131 Command Sets Supported 00:36:18.131 NVM Command Set: Supported 00:36:18.131 Boot Partition: Not Supported 00:36:18.131 Memory Page Size Minimum: 4096 bytes 00:36:18.131 Memory Page Size Maximum: 65536 bytes 00:36:18.131 Persistent Memory Region: Not Supported 00:36:18.131 Optional Asynchronous Events Supported 00:36:18.131 Namespace Attribute Notices: Supported 00:36:18.131 Firmware Activation Notices: Not Supported 00:36:18.131 ANA Change Notices: Not Supported 00:36:18.131 PLE Aggregate Log Change Notices: Not Supported 00:36:18.131 LBA Status Info Alert Notices: Not Supported 00:36:18.131 EGE Aggregate Log Change Notices: Not Supported 00:36:18.131 Normal NVM Subsystem Shutdown event: Not Supported 00:36:18.131 Zone Descriptor Change Notices: Not Supported 00:36:18.131 Discovery Log Change Notices: Not Supported 00:36:18.131 Controller Attributes 00:36:18.131 128-bit Host Identifier: Not Supported 00:36:18.131 Non-Operational Permissive Mode: Not Supported 00:36:18.131 NVM Sets: Not Supported 00:36:18.131 Read Recovery Levels: Not Supported 00:36:18.131 Endurance Groups: Not Supported 00:36:18.131 Predictable Latency Mode: Not Supported 00:36:18.131 Traffic Based Keep Alive: Not Supported 00:36:18.131 Namespace Granularity: Not Supported 00:36:18.131 SQ Associations: Not Supported 00:36:18.131 UUID List: Not Supported 00:36:18.131 Multi-Domain Subsystem: Not Supported 00:36:18.131 Fixed Capacity Management: Not Supported 00:36:18.131 Variable Capacity Management: Not Supported 00:36:18.131 Delete Endurance Group: Not Supported 00:36:18.131 Delete NVM Set: Not Supported 00:36:18.131 Extended LBA Formats Supported: Supported 00:36:18.131 Flexible Data Placement Supported: Not Supported 00:36:18.131 00:36:18.131 Controller Memory Buffer Support 00:36:18.131 ================================ 00:36:18.131 Supported: No 00:36:18.131 00:36:18.131 Persistent Memory Region Support 00:36:18.131 ================================ 00:36:18.131 Supported: No 00:36:18.131 00:36:18.131 Admin Command Set Attributes 00:36:18.131 ============================ 00:36:18.131 Security Send/Receive: Not Supported 00:36:18.132 Format NVM: Supported 00:36:18.132 Firmware Activate/Download: Not Supported 00:36:18.132 Namespace Management: Supported 00:36:18.132 Device Self-Test: Not Supported 00:36:18.132 Directives: Supported 00:36:18.132 NVMe-MI: Not Supported 00:36:18.132 Virtualization Management: Not Supported 00:36:18.132 Doorbell Buffer Config: Supported 00:36:18.132 Get LBA Status Capability: Not Supported 00:36:18.132 Command & Feature Lockdown Capability: Not Supported 00:36:18.132 Abort Command Limit: 4 00:36:18.132 Async Event Request Limit: 4 00:36:18.132 Number of Firmware Slots: N/A 00:36:18.132 Firmware Slot 1 Read-Only: N/A 00:36:18.132 Firmware Activation Without Reset: N/A 00:36:18.132 Multiple Update Detection Support: N/A 00:36:18.132 Firmware Update Granularity: No Information Provided 00:36:18.132 Per-Namespace SMART Log: Yes 00:36:18.132 Asymmetric Namespace Access Log Page: Not Supported 
00:36:18.132 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:36:18.132 Command Effects Log Page: Supported 00:36:18.132 Get Log Page Extended Data: Supported 00:36:18.132 Telemetry Log Pages: Not Supported 00:36:18.132 Persistent Event Log Pages: Not Supported 00:36:18.132 Supported Log Pages Log Page: May Support 00:36:18.132 Commands Supported & Effects Log Page: Not Supported 00:36:18.132 Feature Identifiers & Effects Log Page: May Support 00:36:18.132 NVMe-MI Commands & Effects Log Page: May Support 00:36:18.132 Data Area 4 for Telemetry Log: Not Supported 00:36:18.132 Error Log Page Entries Supported: 1 00:36:18.132 Keep Alive: Not Supported 00:36:18.132 00:36:18.132 NVM Command Set Attributes 00:36:18.132 ========================== 00:36:18.132 Submission Queue Entry Size 00:36:18.132 Max: 64 00:36:18.132 Min: 64 00:36:18.132 Completion Queue Entry Size 00:36:18.132 Max: 16 00:36:18.132 Min: 16 00:36:18.132 Number of Namespaces: 256 00:36:18.132 Compare Command: Supported 00:36:18.132 Write Uncorrectable Command: Not Supported 00:36:18.132 Dataset Management Command: Supported 00:36:18.132 Write Zeroes Command: Supported 00:36:18.132 Set Features Save Field: Supported 00:36:18.132 Reservations: Not Supported 00:36:18.132 Timestamp: Supported 00:36:18.132 Copy: Supported 00:36:18.132 Volatile Write Cache: Present 00:36:18.132 Atomic Write Unit (Normal): 1 00:36:18.132 Atomic Write Unit (PFail): 1 00:36:18.132 Atomic Compare & Write Unit: 1 00:36:18.132 Fused Compare & Write: Not Supported 00:36:18.132 Scatter-Gather List 00:36:18.132 SGL Command Set: Supported 00:36:18.132 SGL Keyed: Not Supported 00:36:18.132 SGL Bit Bucket Descriptor: Not Supported 00:36:18.132 SGL Metadata Pointer: Not Supported 00:36:18.132 Oversized SGL: Not Supported 00:36:18.132 SGL Metadata Address: Not Supported 00:36:18.132 SGL Offset: Not Supported 00:36:18.132 Transport SGL Data Block: Not Supported 00:36:18.132 Replay Protected Memory Block: Not Supported 00:36:18.132 00:36:18.132 Firmware Slot Information 00:36:18.132 ========================= 00:36:18.132 Active slot: 1 00:36:18.132 Slot 1 Firmware Revision: 1.0 00:36:18.132 00:36:18.132 00:36:18.132 Commands Supported and Effects 00:36:18.132 ============================== 00:36:18.132 Admin Commands 00:36:18.132 -------------- 00:36:18.132 Delete I/O Submission Queue (00h): Supported 00:36:18.132 Create I/O Submission Queue (01h): Supported 00:36:18.132 Get Log Page (02h): Supported 00:36:18.132 Delete I/O Completion Queue (04h): Supported 00:36:18.132 Create I/O Completion Queue (05h): Supported 00:36:18.132 Identify (06h): Supported 00:36:18.132 Abort (08h): Supported 00:36:18.132 Set Features (09h): Supported 00:36:18.132 Get Features (0Ah): Supported 00:36:18.132 Asynchronous Event Request (0Ch): Supported 00:36:18.132 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:18.132 Directive Send (19h): Supported 00:36:18.132 Directive Receive (1Ah): Supported 00:36:18.132 Virtualization Management (1Ch): Supported 00:36:18.132 Doorbell Buffer Config (7Ch): Supported 00:36:18.132 Format NVM (80h): Supported LBA-Change 00:36:18.132 I/O Commands 00:36:18.132 ------------ 00:36:18.132 Flush (00h): Supported LBA-Change 00:36:18.132 Write (01h): Supported LBA-Change 00:36:18.132 Read (02h): Supported 00:36:18.132 Compare (05h): Supported 00:36:18.132 Write Zeroes (08h): Supported LBA-Change 00:36:18.132 Dataset Management (09h): Supported LBA-Change 00:36:18.132 Unknown (0Ch): Supported 00:36:18.132 Unknown (12h): Supported 00:36:18.132 Copy (19h): 
Supported LBA-Change 00:36:18.132 Unknown (1Dh): Supported LBA-Change 00:36:18.132 00:36:18.132 Error Log 00:36:18.132 ========= 00:36:18.132 00:36:18.132 Arbitration 00:36:18.132 =========== 00:36:18.132 Arbitration Burst: no limit 00:36:18.132 00:36:18.132 Power Management 00:36:18.132 ================ 00:36:18.132 Number of Power States: 1 00:36:18.132 Current Power State: Power State #0 00:36:18.132 Power State #0: 00:36:18.132 Max Power: 25.00 W 00:36:18.132 Non-Operational State: Operational 00:36:18.132 Entry Latency: 16 microseconds 00:36:18.132 Exit Latency: 4 microseconds 00:36:18.132 Relative Read Throughput: 0 00:36:18.132 Relative Read Latency: 0 00:36:18.132 Relative Write Throughput: 0 00:36:18.132 Relative Write Latency: 0 00:36:18.132 Idle Power: Not Reported 00:36:18.132 Active Power: Not Reported 00:36:18.132 Non-Operational Permissive Mode: Not Supported 00:36:18.132 00:36:18.132 Health Information 00:36:18.132 ================== 00:36:18.132 Critical Warnings: 00:36:18.132 Available Spare Space: OK 00:36:18.132 Temperature: OK 00:36:18.132 Device Reliability: OK 00:36:18.132 Read Only: No 00:36:18.132 Volatile Memory Backup: OK 00:36:18.132 Current Temperature: 323 Kelvin (50 Celsius) 00:36:18.132 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:18.132 Available Spare: 0% 00:36:18.132 Available Spare Threshold: 0% 00:36:18.132 Life Percentage Used: 0% 00:36:18.132 Data Units Read: 928 00:36:18.132 Data Units Written: 761 00:36:18.132 Host Read Commands: 46845 00:36:18.132 Host Write Commands: 45311 00:36:18.132 Controller Busy Time: 0 minutes 00:36:18.132 Power Cycles: 0 00:36:18.132 Power On Hours: 0 hours 00:36:18.132 Unsafe Shutdowns: 0 00:36:18.132 Unrecoverable Media Errors: 0 00:36:18.132 Lifetime Error Log Entries: 0 00:36:18.132 Warning Temperature Time: 0 minutes 00:36:18.132 Critical Temperature Time: 0 minutes 00:36:18.132 00:36:18.132 Number of Queues 00:36:18.132 ================ 00:36:18.132 Number of I/O Submission Queues: 64 00:36:18.132 Number of I/O Completion Queues: 64 00:36:18.132 00:36:18.132 ZNS Specific Controller Data 00:36:18.132 ============================ 00:36:18.132 Zone Append Size Limit: 0 00:36:18.132 00:36:18.132 00:36:18.132 Active Namespaces 00:36:18.132 ================= 00:36:18.132 Namespace ID:1 00:36:18.132 Error Recovery Timeout: Unlimited 00:36:18.132 Command Set Identifier: NVM (00h) 00:36:18.132 Deallocate: Supported 00:36:18.132 Deallocated/Unwritten Error: Supported 00:36:18.132 Deallocated Read Value: All 0x00 00:36:18.132 Deallocate in Write Zeroes: Not Supported 00:36:18.132 Deallocated Guard Field: 0xFFFF 00:36:18.132 Flush: Supported 00:36:18.132 Reservation: Not Supported 00:36:18.132 Metadata Transferred as: Separate Metadata Buffer 00:36:18.132 Namespace Sharing Capabilities: Private 00:36:18.132 Size (in LBAs): 1548666 (5GiB) 00:36:18.132 Capacity (in LBAs): 1548666 (5GiB) 00:36:18.132 Utilization (in LBAs): 1548666 (5GiB) 00:36:18.132 Thin Provisioning: Not Supported 00:36:18.132 Per-NS Atomic Units: No 00:36:18.132 Maximum Single Source Range Length: 128 00:36:18.132 Maximum Copy Length: 128 00:36:18.132 Maximum Source Range Count: 128 00:36:18.132 NGUID/EUI64 Never Reused: No 00:36:18.132 Namespace Write Protected: No 00:36:18.132 Number of LBA Formats: 8 00:36:18.132 Current LBA Format: LBA Format #07 00:36:18.132 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.132 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.132 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.132 LBA Format 
#03: Data Size: 512 Metadata Size: 64 00:36:18.132 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.132 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.132 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:18.132 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.132 00:36:18.132 09:05:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:18.132 09:05:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:36:18.397 ===================================================== 00:36:18.397 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:18.397 ===================================================== 00:36:18.397 Controller Capabilities/Features 00:36:18.397 ================================ 00:36:18.397 Vendor ID: 1b36 00:36:18.397 Subsystem Vendor ID: 1af4 00:36:18.397 Serial Number: 12341 00:36:18.397 Model Number: QEMU NVMe Ctrl 00:36:18.397 Firmware Version: 8.0.0 00:36:18.397 Recommended Arb Burst: 6 00:36:18.397 IEEE OUI Identifier: 00 54 52 00:36:18.397 Multi-path I/O 00:36:18.397 May have multiple subsystem ports: No 00:36:18.397 May have multiple controllers: No 00:36:18.397 Associated with SR-IOV VF: No 00:36:18.397 Max Data Transfer Size: 524288 00:36:18.397 Max Number of Namespaces: 256 00:36:18.397 Max Number of I/O Queues: 64 00:36:18.397 NVMe Specification Version (VS): 1.4 00:36:18.397 NVMe Specification Version (Identify): 1.4 00:36:18.397 Maximum Queue Entries: 2048 00:36:18.397 Contiguous Queues Required: Yes 00:36:18.397 Arbitration Mechanisms Supported 00:36:18.397 Weighted Round Robin: Not Supported 00:36:18.397 Vendor Specific: Not Supported 00:36:18.397 Reset Timeout: 7500 ms 00:36:18.397 Doorbell Stride: 4 bytes 00:36:18.397 NVM Subsystem Reset: Not Supported 00:36:18.397 Command Sets Supported 00:36:18.397 NVM Command Set: Supported 00:36:18.397 Boot Partition: Not Supported 00:36:18.397 Memory Page Size Minimum: 4096 bytes 00:36:18.397 Memory Page Size Maximum: 65536 bytes 00:36:18.397 Persistent Memory Region: Not Supported 00:36:18.397 Optional Asynchronous Events Supported 00:36:18.397 Namespace Attribute Notices: Supported 00:36:18.397 Firmware Activation Notices: Not Supported 00:36:18.397 ANA Change Notices: Not Supported 00:36:18.397 PLE Aggregate Log Change Notices: Not Supported 00:36:18.397 LBA Status Info Alert Notices: Not Supported 00:36:18.397 EGE Aggregate Log Change Notices: Not Supported 00:36:18.397 Normal NVM Subsystem Shutdown event: Not Supported 00:36:18.397 Zone Descriptor Change Notices: Not Supported 00:36:18.397 Discovery Log Change Notices: Not Supported 00:36:18.397 Controller Attributes 00:36:18.397 128-bit Host Identifier: Not Supported 00:36:18.397 Non-Operational Permissive Mode: Not Supported 00:36:18.397 NVM Sets: Not Supported 00:36:18.397 Read Recovery Levels: Not Supported 00:36:18.397 Endurance Groups: Not Supported 00:36:18.397 Predictable Latency Mode: Not Supported 00:36:18.397 Traffic Based Keep Alive: Not Supported 00:36:18.397 Namespace Granularity: Not Supported 00:36:18.397 SQ Associations: Not Supported 00:36:18.397 UUID List: Not Supported 00:36:18.397 Multi-Domain Subsystem: Not Supported 00:36:18.397 Fixed Capacity Management: Not Supported 00:36:18.397 Variable Capacity Management: Not Supported 00:36:18.397 Delete Endurance Group: Not Supported 00:36:18.397 Delete NVM Set: Not Supported 00:36:18.397 Extended LBA Formats Supported: Supported 00:36:18.397 Flexible Data Placement Supported: Not Supported 00:36:18.397 
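The nvme.sh fragments embedded in the log above are the driver of this whole dump: the test iterates over the enumerated PCI addresses and runs spdk_nvme_identify once per controller. A minimal reconstruction of that loop, assuming bdfs holds the four addresses seen in this log (the array literal is illustrative; the real script populates it during device enumeration):

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done

The -r argument is an SPDK transport ID string (transport type plus PCI address); -i passes a shared memory group ID so identify can attach alongside an already-running SPDK process.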
00:36:18.397 Controller Memory Buffer Support 00:36:18.397 ================================ 00:36:18.397 Supported: No 00:36:18.397 00:36:18.397 Persistent Memory Region Support 00:36:18.397 ================================ 00:36:18.397 Supported: No 00:36:18.397 00:36:18.397 Admin Command Set Attributes 00:36:18.397 ============================ 00:36:18.397 Security Send/Receive: Not Supported 00:36:18.397 Format NVM: Supported 00:36:18.397 Firmware Activate/Download: Not Supported 00:36:18.397 Namespace Management: Supported 00:36:18.397 Device Self-Test: Not Supported 00:36:18.397 Directives: Supported 00:36:18.397 NVMe-MI: Not Supported 00:36:18.397 Virtualization Management: Not Supported 00:36:18.397 Doorbell Buffer Config: Supported 00:36:18.397 Get LBA Status Capability: Not Supported 00:36:18.397 Command & Feature Lockdown Capability: Not Supported 00:36:18.397 Abort Command Limit: 4 00:36:18.397 Async Event Request Limit: 4 00:36:18.397 Number of Firmware Slots: N/A 00:36:18.397 Firmware Slot 1 Read-Only: N/A 00:36:18.397 Firmware Activation Without Reset: N/A 00:36:18.397 Multiple Update Detection Support: N/A 00:36:18.397 Firmware Update Granularity: No Information Provided 00:36:18.397 Per-Namespace SMART Log: Yes 00:36:18.397 Asymmetric Namespace Access Log Page: Not Supported 00:36:18.397 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:36:18.397 Command Effects Log Page: Supported 00:36:18.397 Get Log Page Extended Data: Supported 00:36:18.397 Telemetry Log Pages: Not Supported 00:36:18.398 Persistent Event Log Pages: Not Supported 00:36:18.398 Supported Log Pages Log Page: May Support 00:36:18.398 Commands Supported & Effects Log Page: Not Supported 00:36:18.398 Feature Identifiers & Effects Log Page: May Support 00:36:18.398 NVMe-MI Commands & Effects Log Page: May Support 00:36:18.398 Data Area 4 for Telemetry Log: Not Supported 00:36:18.398 Error Log Page Entries Supported: 1 00:36:18.398 Keep Alive: Not Supported 00:36:18.398 00:36:18.398 NVM Command Set Attributes 00:36:18.398 ========================== 00:36:18.398 Submission Queue Entry Size 00:36:18.398 Max: 64 00:36:18.398 Min: 64 00:36:18.398 Completion Queue Entry Size 00:36:18.398 Max: 16 00:36:18.398 Min: 16 00:36:18.398 Number of Namespaces: 256 00:36:18.398 Compare Command: Supported 00:36:18.398 Write Uncorrectable Command: Not Supported 00:36:18.398 Dataset Management Command: Supported 00:36:18.398 Write Zeroes Command: Supported 00:36:18.398 Set Features Save Field: Supported 00:36:18.398 Reservations: Not Supported 00:36:18.398 Timestamp: Supported 00:36:18.398 Copy: Supported 00:36:18.398 Volatile Write Cache: Present 00:36:18.398 Atomic Write Unit (Normal): 1 00:36:18.398 Atomic Write Unit (PFail): 1 00:36:18.398 Atomic Compare & Write Unit: 1 00:36:18.398 Fused Compare & Write: Not Supported 00:36:18.398 Scatter-Gather List 00:36:18.398 SGL Command Set: Supported 00:36:18.398 SGL Keyed: Not Supported 00:36:18.398 SGL Bit Bucket Descriptor: Not Supported 00:36:18.398 SGL Metadata Pointer: Not Supported 00:36:18.398 Oversized SGL: Not Supported 00:36:18.398 SGL Metadata Address: Not Supported 00:36:18.398 SGL Offset: Not Supported 00:36:18.398 Transport SGL Data Block: Not Supported 00:36:18.398 Replay Protected Memory Block: Not Supported 00:36:18.398 00:36:18.398 Firmware Slot Information 00:36:18.398 ========================= 00:36:18.398 Active slot: 1 00:36:18.398 Slot 1 Firmware Revision: 1.0 00:36:18.398 00:36:18.398 00:36:18.398 Commands Supported and Effects 00:36:18.398 
============================== 00:36:18.398 Admin Commands 00:36:18.398 -------------- 00:36:18.398 Delete I/O Submission Queue (00h): Supported 00:36:18.398 Create I/O Submission Queue (01h): Supported 00:36:18.398 Get Log Page (02h): Supported 00:36:18.398 Delete I/O Completion Queue (04h): Supported 00:36:18.398 Create I/O Completion Queue (05h): Supported 00:36:18.398 Identify (06h): Supported 00:36:18.398 Abort (08h): Supported 00:36:18.398 Set Features (09h): Supported 00:36:18.398 Get Features (0Ah): Supported 00:36:18.398 Asynchronous Event Request (0Ch): Supported 00:36:18.398 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:18.398 Directive Send (19h): Supported 00:36:18.398 Directive Receive (1Ah): Supported 00:36:18.398 Virtualization Management (1Ch): Supported 00:36:18.398 Doorbell Buffer Config (7Ch): Supported 00:36:18.398 Format NVM (80h): Supported LBA-Change 00:36:18.398 I/O Commands 00:36:18.398 ------------ 00:36:18.398 Flush (00h): Supported LBA-Change 00:36:18.398 Write (01h): Supported LBA-Change 00:36:18.398 Read (02h): Supported 00:36:18.398 Compare (05h): Supported 00:36:18.398 Write Zeroes (08h): Supported LBA-Change 00:36:18.398 Dataset Management (09h): Supported LBA-Change 00:36:18.398 Unknown (0Ch): Supported 00:36:18.398 Unknown (12h): Supported 00:36:18.398 Copy (19h): Supported LBA-Change 00:36:18.398 Unknown (1Dh): Supported LBA-Change 00:36:18.398 00:36:18.398 Error Log 00:36:18.398 ========= 00:36:18.398 00:36:18.398 Arbitration 00:36:18.398 =========== 00:36:18.398 Arbitration Burst: no limit 00:36:18.398 00:36:18.398 Power Management 00:36:18.398 ================ 00:36:18.398 Number of Power States: 1 00:36:18.398 Current Power State: Power State #0 00:36:18.398 Power State #0: 00:36:18.398 Max Power: 25.00 W 00:36:18.398 Non-Operational State: Operational 00:36:18.398 Entry Latency: 16 microseconds 00:36:18.398 Exit Latency: 4 microseconds 00:36:18.398 Relative Read Throughput: 0 00:36:18.398 Relative Read Latency: 0 00:36:18.398 Relative Write Throughput: 0 00:36:18.398 Relative Write Latency: 0 00:36:18.398 Idle Power: Not Reported 00:36:18.398 Active Power: Not Reported 00:36:18.398 Non-Operational Permissive Mode: Not Supported 00:36:18.398 00:36:18.398 Health Information 00:36:18.398 ================== 00:36:18.398 Critical Warnings: 00:36:18.398 Available Spare Space: OK 00:36:18.398 Temperature: OK 00:36:18.398 Device Reliability: OK 00:36:18.398 Read Only: No 00:36:18.398 Volatile Memory Backup: OK 00:36:18.398 Current Temperature: 323 Kelvin (50 Celsius) 00:36:18.398 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:18.398 Available Spare: 0% 00:36:18.398 Available Spare Threshold: 0% 00:36:18.398 Life Percentage Used: 0% 00:36:18.398 Data Units Read: 717 00:36:18.398 Data Units Written: 563 00:36:18.398 Host Read Commands: 33233 00:36:18.398 Host Write Commands: 30910 00:36:18.398 Controller Busy Time: 0 minutes 00:36:18.398 Power Cycles: 0 00:36:18.398 Power On Hours: 0 hours 00:36:18.398 Unsafe Shutdowns: 0 00:36:18.398 Unrecoverable Media Errors: 0 00:36:18.398 Lifetime Error Log Entries: 0 00:36:18.398 Warning Temperature Time: 0 minutes 00:36:18.398 Critical Temperature Time: 0 minutes 00:36:18.398 00:36:18.398 Number of Queues 00:36:18.398 ================ 00:36:18.398 Number of I/O Submission Queues: 64 00:36:18.398 Number of I/O Completion Queues: 64 00:36:18.398 00:36:18.398 ZNS Specific Controller Data 00:36:18.398 ============================ 00:36:18.398 Zone Append Size Limit: 0 00:36:18.398 00:36:18.398 
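Two figures in this report are easy to sanity-check by hand. The Health Information block above gives the composite temperature in Kelvin and, apparently by subtracting 273, the Celsius value in parentheses: 323 K reads as 50 Celsius, and the 343 K threshold as 70 Celsius. In the Active Namespaces section that follows, the GiB figure is the LBA count times the 4096-byte data size of the current LBA Format #04: 1310720 * 4096 = 5368709120 bytes = 5 GiB. Both checks in plain bash arithmetic, using only constants taken from this report:

  echo $(( 323 - 273 ))                 # composite temperature in Celsius: 50
  echo $(( 1310720 * 4096 / 1024**3 ))  # namespace size in GiB: 5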
00:36:18.398 Active Namespaces 00:36:18.398 ================= 00:36:18.398 Namespace ID:1 00:36:18.398 Error Recovery Timeout: Unlimited 00:36:18.398 Command Set Identifier: NVM (00h) 00:36:18.398 Deallocate: Supported 00:36:18.398 Deallocated/Unwritten Error: Supported 00:36:18.398 Deallocated Read Value: All 0x00 00:36:18.398 Deallocate in Write Zeroes: Not Supported 00:36:18.398 Deallocated Guard Field: 0xFFFF 00:36:18.398 Flush: Supported 00:36:18.398 Reservation: Not Supported 00:36:18.398 Namespace Sharing Capabilities: Private 00:36:18.398 Size (in LBAs): 1310720 (5GiB) 00:36:18.398 Capacity (in LBAs): 1310720 (5GiB) 00:36:18.398 Utilization (in LBAs): 1310720 (5GiB) 00:36:18.398 Thin Provisioning: Not Supported 00:36:18.398 Per-NS Atomic Units: No 00:36:18.398 Maximum Single Source Range Length: 128 00:36:18.398 Maximum Copy Length: 128 00:36:18.398 Maximum Source Range Count: 128 00:36:18.398 NGUID/EUI64 Never Reused: No 00:36:18.398 Namespace Write Protected: No 00:36:18.398 Number of LBA Formats: 8 00:36:18.398 Current LBA Format: LBA Format #04 00:36:18.398 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.398 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.398 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.398 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:18.398 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.398 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.398 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:18.398 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.398 00:36:18.398 09:05:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:18.398 09:05:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:36:18.657 ===================================================== 00:36:18.657 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:18.657 ===================================================== 00:36:18.657 Controller Capabilities/Features 00:36:18.657 ================================ 00:36:18.657 Vendor ID: 1b36 00:36:18.657 Subsystem Vendor ID: 1af4 00:36:18.657 Serial Number: 12342 00:36:18.657 Model Number: QEMU NVMe Ctrl 00:36:18.657 Firmware Version: 8.0.0 00:36:18.657 Recommended Arb Burst: 6 00:36:18.657 IEEE OUI Identifier: 00 54 52 00:36:18.657 Multi-path I/O 00:36:18.657 May have multiple subsystem ports: No 00:36:18.657 May have multiple controllers: No 00:36:18.657 Associated with SR-IOV VF: No 00:36:18.657 Max Data Transfer Size: 524288 00:36:18.657 Max Number of Namespaces: 256 00:36:18.657 Max Number of I/O Queues: 64 00:36:18.658 NVMe Specification Version (VS): 1.4 00:36:18.658 NVMe Specification Version (Identify): 1.4 00:36:18.658 Maximum Queue Entries: 2048 00:36:18.658 Contiguous Queues Required: Yes 00:36:18.658 Arbitration Mechanisms Supported 00:36:18.658 Weighted Round Robin: Not Supported 00:36:18.658 Vendor Specific: Not Supported 00:36:18.658 Reset Timeout: 7500 ms 00:36:18.658 Doorbell Stride: 4 bytes 00:36:18.658 NVM Subsystem Reset: Not Supported 00:36:18.658 Command Sets Supported 00:36:18.658 NVM Command Set: Supported 00:36:18.658 Boot Partition: Not Supported 00:36:18.658 Memory Page Size Minimum: 4096 bytes 00:36:18.658 Memory Page Size Maximum: 65536 bytes 00:36:18.658 Persistent Memory Region: Not Supported 00:36:18.658 Optional Asynchronous Events Supported 00:36:18.658 Namespace Attribute Notices: Supported 00:36:18.658 Firmware Activation Notices: Not Supported 00:36:18.658 ANA Change 
Notices: Not Supported 00:36:18.658 PLE Aggregate Log Change Notices: Not Supported 00:36:18.658 LBA Status Info Alert Notices: Not Supported 00:36:18.658 EGE Aggregate Log Change Notices: Not Supported 00:36:18.658 Normal NVM Subsystem Shutdown event: Not Supported 00:36:18.658 Zone Descriptor Change Notices: Not Supported 00:36:18.658 Discovery Log Change Notices: Not Supported 00:36:18.658 Controller Attributes 00:36:18.658 128-bit Host Identifier: Not Supported 00:36:18.658 Non-Operational Permissive Mode: Not Supported 00:36:18.658 NVM Sets: Not Supported 00:36:18.658 Read Recovery Levels: Not Supported 00:36:18.658 Endurance Groups: Not Supported 00:36:18.658 Predictable Latency Mode: Not Supported 00:36:18.658 Traffic Based Keep Alive: Not Supported 00:36:18.658 Namespace Granularity: Not Supported 00:36:18.658 SQ Associations: Not Supported 00:36:18.658 UUID List: Not Supported 00:36:18.658 Multi-Domain Subsystem: Not Supported 00:36:18.658 Fixed Capacity Management: Not Supported 00:36:18.658 Variable Capacity Management: Not Supported 00:36:18.658 Delete Endurance Group: Not Supported 00:36:18.658 Delete NVM Set: Not Supported 00:36:18.658 Extended LBA Formats Supported: Supported 00:36:18.658 Flexible Data Placement Supported: Not Supported 00:36:18.658 00:36:18.658 Controller Memory Buffer Support 00:36:18.658 ================================ 00:36:18.658 Supported: No 00:36:18.658 00:36:18.658 Persistent Memory Region Support 00:36:18.658 ================================ 00:36:18.658 Supported: No 00:36:18.658 00:36:18.658 Admin Command Set Attributes 00:36:18.658 ============================ 00:36:18.658 Security Send/Receive: Not Supported 00:36:18.658 Format NVM: Supported 00:36:18.658 Firmware Activate/Download: Not Supported 00:36:18.658 Namespace Management: Supported 00:36:18.658 Device Self-Test: Not Supported 00:36:18.658 Directives: Supported 00:36:18.658 NVMe-MI: Not Supported 00:36:18.658 Virtualization Management: Not Supported 00:36:18.658 Doorbell Buffer Config: Supported 00:36:18.658 Get LBA Status Capability: Not Supported 00:36:18.658 Command & Feature Lockdown Capability: Not Supported 00:36:18.658 Abort Command Limit: 4 00:36:18.658 Async Event Request Limit: 4 00:36:18.658 Number of Firmware Slots: N/A 00:36:18.658 Firmware Slot 1 Read-Only: N/A 00:36:18.658 Firmware Activation Without Reset: N/A 00:36:18.658 Multiple Update Detection Support: N/A 00:36:18.658 Firmware Update Granularity: No Information Provided 00:36:18.658 Per-Namespace SMART Log: Yes 00:36:18.658 Asymmetric Namespace Access Log Page: Not Supported 00:36:18.658 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:36:18.658 Command Effects Log Page: Supported 00:36:18.658 Get Log Page Extended Data: Supported 00:36:18.658 Telemetry Log Pages: Not Supported 00:36:18.658 Persistent Event Log Pages: Not Supported 00:36:18.658 Supported Log Pages Log Page: May Support 00:36:18.658 Commands Supported & Effects Log Page: Not Supported 00:36:18.658 Feature Identifiers & Effects Log Page: May Support 00:36:18.658 NVMe-MI Commands & Effects Log Page: May Support 00:36:18.658 Data Area 4 for Telemetry Log: Not Supported 00:36:18.658 Error Log Page Entries Supported: 1 00:36:18.658 Keep Alive: Not Supported 00:36:18.658 00:36:18.658 NVM Command Set Attributes 00:36:18.658 ========================== 00:36:18.658 Submission Queue Entry Size 00:36:18.658 Max: 64 00:36:18.658 Min: 64 00:36:18.658 Completion Queue Entry Size 00:36:18.658 Max: 16 00:36:18.658 Min: 16 00:36:18.658 Number of Namespaces: 256 
00:36:18.658 Compare Command: Supported 00:36:18.658 Write Uncorrectable Command: Not Supported 00:36:18.658 Dataset Management Command: Supported 00:36:18.658 Write Zeroes Command: Supported 00:36:18.658 Set Features Save Field: Supported 00:36:18.658 Reservations: Not Supported 00:36:18.658 Timestamp: Supported 00:36:18.658 Copy: Supported 00:36:18.658 Volatile Write Cache: Present 00:36:18.658 Atomic Write Unit (Normal): 1 00:36:18.658 Atomic Write Unit (PFail): 1 00:36:18.658 Atomic Compare & Write Unit: 1 00:36:18.658 Fused Compare & Write: Not Supported 00:36:18.658 Scatter-Gather List 00:36:18.658 SGL Command Set: Supported 00:36:18.658 SGL Keyed: Not Supported 00:36:18.658 SGL Bit Bucket Descriptor: Not Supported 00:36:18.658 SGL Metadata Pointer: Not Supported 00:36:18.658 Oversized SGL: Not Supported 00:36:18.658 SGL Metadata Address: Not Supported 00:36:18.658 SGL Offset: Not Supported 00:36:18.658 Transport SGL Data Block: Not Supported 00:36:18.658 Replay Protected Memory Block: Not Supported 00:36:18.658 00:36:18.658 Firmware Slot Information 00:36:18.658 ========================= 00:36:18.658 Active slot: 1 00:36:18.658 Slot 1 Firmware Revision: 1.0 00:36:18.658 00:36:18.658 00:36:18.658 Commands Supported and Effects 00:36:18.658 ============================== 00:36:18.658 Admin Commands 00:36:18.658 -------------- 00:36:18.658 Delete I/O Submission Queue (00h): Supported 00:36:18.658 Create I/O Submission Queue (01h): Supported 00:36:18.658 Get Log Page (02h): Supported 00:36:18.658 Delete I/O Completion Queue (04h): Supported 00:36:18.658 Create I/O Completion Queue (05h): Supported 00:36:18.658 Identify (06h): Supported 00:36:18.658 Abort (08h): Supported 00:36:18.658 Set Features (09h): Supported 00:36:18.658 Get Features (0Ah): Supported 00:36:18.658 Asynchronous Event Request (0Ch): Supported 00:36:18.658 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:18.658 Directive Send (19h): Supported 00:36:18.658 Directive Receive (1Ah): Supported 00:36:18.658 Virtualization Management (1Ch): Supported 00:36:18.658 Doorbell Buffer Config (7Ch): Supported 00:36:18.658 Format NVM (80h): Supported LBA-Change 00:36:18.658 I/O Commands 00:36:18.658 ------------ 00:36:18.658 Flush (00h): Supported LBA-Change 00:36:18.658 Write (01h): Supported LBA-Change 00:36:18.658 Read (02h): Supported 00:36:18.658 Compare (05h): Supported 00:36:18.658 Write Zeroes (08h): Supported LBA-Change 00:36:18.658 Dataset Management (09h): Supported LBA-Change 00:36:18.658 Unknown (0Ch): Supported 00:36:18.658 Unknown (12h): Supported 00:36:18.658 Copy (19h): Supported LBA-Change 00:36:18.658 Unknown (1Dh): Supported LBA-Change 00:36:18.658 00:36:18.658 Error Log 00:36:18.658 ========= 00:36:18.658 00:36:18.658 Arbitration 00:36:18.658 =========== 00:36:18.658 Arbitration Burst: no limit 00:36:18.658 00:36:18.658 Power Management 00:36:18.658 ================ 00:36:18.658 Number of Power States: 1 00:36:18.658 Current Power State: Power State #0 00:36:18.658 Power State #0: 00:36:18.658 Max Power: 25.00 W 00:36:18.658 Non-Operational State: Operational 00:36:18.658 Entry Latency: 16 microseconds 00:36:18.658 Exit Latency: 4 microseconds 00:36:18.658 Relative Read Throughput: 0 00:36:18.658 Relative Read Latency: 0 00:36:18.658 Relative Write Throughput: 0 00:36:18.658 Relative Write Latency: 0 00:36:18.658 Idle Power: Not Reported 00:36:18.658 Active Power: Not Reported 00:36:18.658 Non-Operational Permissive Mode: Not Supported 00:36:18.658 00:36:18.658 Health Information 00:36:18.658 
================== 00:36:18.658 Critical Warnings: 00:36:18.658 Available Spare Space: OK 00:36:18.658 Temperature: OK 00:36:18.658 Device Reliability: OK 00:36:18.658 Read Only: No 00:36:18.658 Volatile Memory Backup: OK 00:36:18.658 Current Temperature: 323 Kelvin (50 Celsius) 00:36:18.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:18.658 Available Spare: 0% 00:36:18.658 Available Spare Threshold: 0% 00:36:18.658 Life Percentage Used: 0% 00:36:18.658 Data Units Read: 1972 00:36:18.658 Data Units Written: 1652 00:36:18.658 Host Read Commands: 96989 00:36:18.658 Host Write Commands: 92760 00:36:18.658 Controller Busy Time: 0 minutes 00:36:18.658 Power Cycles: 0 00:36:18.658 Power On Hours: 0 hours 00:36:18.659 Unsafe Shutdowns: 0 00:36:18.659 Unrecoverable Media Errors: 0 00:36:18.659 Lifetime Error Log Entries: 0 00:36:18.659 Warning Temperature Time: 0 minutes 00:36:18.659 Critical Temperature Time: 0 minutes 00:36:18.659 00:36:18.659 Number of Queues 00:36:18.659 ================ 00:36:18.659 Number of I/O Submission Queues: 64 00:36:18.659 Number of I/O Completion Queues: 64 00:36:18.659 00:36:18.659 ZNS Specific Controller Data 00:36:18.659 ============================ 00:36:18.659 Zone Append Size Limit: 0 00:36:18.659 00:36:18.659 00:36:18.659 Active Namespaces 00:36:18.659 ================= 00:36:18.659 Namespace ID:1 00:36:18.659 Error Recovery Timeout: Unlimited 00:36:18.659 Command Set Identifier: NVM (00h) 00:36:18.659 Deallocate: Supported 00:36:18.659 Deallocated/Unwritten Error: Supported 00:36:18.659 Deallocated Read Value: All 0x00 00:36:18.659 Deallocate in Write Zeroes: Not Supported 00:36:18.659 Deallocated Guard Field: 0xFFFF 00:36:18.659 Flush: Supported 00:36:18.659 Reservation: Not Supported 00:36:18.659 Namespace Sharing Capabilities: Private 00:36:18.659 Size (in LBAs): 1048576 (4GiB) 00:36:18.659 Capacity (in LBAs): 1048576 (4GiB) 00:36:18.659 Utilization (in LBAs): 1048576 (4GiB) 00:36:18.659 Thin Provisioning: Not Supported 00:36:18.659 Per-NS Atomic Units: No 00:36:18.659 Maximum Single Source Range Length: 128 00:36:18.659 Maximum Copy Length: 128 00:36:18.659 Maximum Source Range Count: 128 00:36:18.659 NGUID/EUI64 Never Reused: No 00:36:18.659 Namespace Write Protected: No 00:36:18.659 Number of LBA Formats: 8 00:36:18.659 Current LBA Format: LBA Format #04 00:36:18.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.659 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.659 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.659 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:18.659 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.659 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.659 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:18.659 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.659 00:36:18.659 Namespace ID:2 00:36:18.659 Error Recovery Timeout: Unlimited 00:36:18.659 Command Set Identifier: NVM (00h) 00:36:18.659 Deallocate: Supported 00:36:18.659 Deallocated/Unwritten Error: Supported 00:36:18.659 Deallocated Read Value: All 0x00 00:36:18.659 Deallocate in Write Zeroes: Not Supported 00:36:18.659 Deallocated Guard Field: 0xFFFF 00:36:18.659 Flush: Supported 00:36:18.659 Reservation: Not Supported 00:36:18.659 Namespace Sharing Capabilities: Private 00:36:18.659 Size (in LBAs): 1048576 (4GiB) 00:36:18.659 Capacity (in LBAs): 1048576 (4GiB) 00:36:18.659 Utilization (in LBAs): 1048576 (4GiB) 00:36:18.659 Thin Provisioning: Not Supported 00:36:18.659 Per-NS Atomic Units: No 
00:36:18.659 Maximum Single Source Range Length: 128 00:36:18.659 Maximum Copy Length: 128 00:36:18.659 Maximum Source Range Count: 128 00:36:18.659 NGUID/EUI64 Never Reused: No 00:36:18.659 Namespace Write Protected: No 00:36:18.659 Number of LBA Formats: 8 00:36:18.659 Current LBA Format: LBA Format #04 00:36:18.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.659 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.659 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.659 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:18.659 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.659 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.659 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:18.659 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.659 00:36:18.659 Namespace ID:3 00:36:18.659 Error Recovery Timeout: Unlimited 00:36:18.659 Command Set Identifier: NVM (00h) 00:36:18.659 Deallocate: Supported 00:36:18.659 Deallocated/Unwritten Error: Supported 00:36:18.659 Deallocated Read Value: All 0x00 00:36:18.659 Deallocate in Write Zeroes: Not Supported 00:36:18.659 Deallocated Guard Field: 0xFFFF 00:36:18.659 Flush: Supported 00:36:18.659 Reservation: Not Supported 00:36:18.659 Namespace Sharing Capabilities: Private 00:36:18.659 Size (in LBAs): 1048576 (4GiB) 00:36:18.659 Capacity (in LBAs): 1048576 (4GiB) 00:36:18.659 Utilization (in LBAs): 1048576 (4GiB) 00:36:18.659 Thin Provisioning: Not Supported 00:36:18.659 Per-NS Atomic Units: No 00:36:18.659 Maximum Single Source Range Length: 128 00:36:18.659 Maximum Copy Length: 128 00:36:18.659 Maximum Source Range Count: 128 00:36:18.659 NGUID/EUI64 Never Reused: No 00:36:18.659 Namespace Write Protected: No 00:36:18.659 Number of LBA Formats: 8 00:36:18.659 Current LBA Format: LBA Format #04 00:36:18.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.659 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.659 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.659 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:18.659 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.659 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.659 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:18.659 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.659 00:36:18.659 09:05:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:18.659 09:05:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:36:18.918 ===================================================== 00:36:18.918 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:18.918 ===================================================== 00:36:18.918 Controller Capabilities/Features 00:36:18.918 ================================ 00:36:18.918 Vendor ID: 1b36 00:36:18.918 Subsystem Vendor ID: 1af4 00:36:18.918 Serial Number: 12343 00:36:18.918 Model Number: QEMU NVMe Ctrl 00:36:18.918 Firmware Version: 8.0.0 00:36:18.918 Recommended Arb Burst: 6 00:36:18.918 IEEE OUI Identifier: 00 54 52 00:36:18.918 Multi-path I/O 00:36:18.918 May have multiple subsystem ports: No 00:36:18.918 May have multiple controllers: Yes 00:36:18.918 Associated with SR-IOV VF: No 00:36:18.918 Max Data Transfer Size: 524288 00:36:18.918 Max Number of Namespaces: 256 00:36:18.918 Max Number of I/O Queues: 64 00:36:18.918 NVMe Specification Version (VS): 1.4 00:36:18.918 NVMe Specification Version (Identify): 1.4 00:36:18.918 Maximum Queue Entries: 2048 
00:36:18.918 Contiguous Queues Required: Yes 00:36:18.918 Arbitration Mechanisms Supported 00:36:18.918 Weighted Round Robin: Not Supported 00:36:18.918 Vendor Specific: Not Supported 00:36:18.918 Reset Timeout: 7500 ms 00:36:18.918 Doorbell Stride: 4 bytes 00:36:18.918 NVM Subsystem Reset: Not Supported 00:36:18.918 Command Sets Supported 00:36:18.918 NVM Command Set: Supported 00:36:18.918 Boot Partition: Not Supported 00:36:18.918 Memory Page Size Minimum: 4096 bytes 00:36:18.918 Memory Page Size Maximum: 65536 bytes 00:36:18.918 Persistent Memory Region: Not Supported 00:36:18.918 Optional Asynchronous Events Supported 00:36:18.918 Namespace Attribute Notices: Supported 00:36:18.918 Firmware Activation Notices: Not Supported 00:36:18.918 ANA Change Notices: Not Supported 00:36:18.918 PLE Aggregate Log Change Notices: Not Supported 00:36:18.918 LBA Status Info Alert Notices: Not Supported 00:36:18.918 EGE Aggregate Log Change Notices: Not Supported 00:36:18.918 Normal NVM Subsystem Shutdown event: Not Supported 00:36:18.918 Zone Descriptor Change Notices: Not Supported 00:36:18.918 Discovery Log Change Notices: Not Supported 00:36:18.919 Controller Attributes 00:36:18.919 128-bit Host Identifier: Not Supported 00:36:18.919 Non-Operational Permissive Mode: Not Supported 00:36:18.919 NVM Sets: Not Supported 00:36:18.919 Read Recovery Levels: Not Supported 00:36:18.919 Endurance Groups: Supported 00:36:18.919 Predictable Latency Mode: Not Supported 00:36:18.919 Traffic Based Keep Alive: Not Supported 00:36:18.919 Namespace Granularity: Not Supported 00:36:18.919 SQ Associations: Not Supported 00:36:18.919 UUID List: Not Supported 00:36:18.919 Multi-Domain Subsystem: Not Supported 00:36:18.919 Fixed Capacity Management: Not Supported 00:36:18.919 Variable Capacity Management: Not Supported 00:36:18.919 Delete Endurance Group: Not Supported 00:36:18.919 Delete NVM Set: Not Supported 00:36:18.919 Extended LBA Formats Supported: Supported 00:36:18.919 Flexible Data Placement Supported: Supported 00:36:18.919 00:36:18.919 Controller Memory Buffer Support 00:36:18.919 ================================ 00:36:18.919 Supported: No 00:36:18.919 00:36:18.919 Persistent Memory Region Support 00:36:18.919 ================================ 00:36:18.919 Supported: No 00:36:18.919 00:36:18.919 Admin Command Set Attributes 00:36:18.919 ============================ 00:36:18.919 Security Send/Receive: Not Supported 00:36:18.919 Format NVM: Supported 00:36:18.919 Firmware Activate/Download: Not Supported 00:36:18.919 Namespace Management: Supported 00:36:18.919 Device Self-Test: Not Supported 00:36:18.919 Directives: Supported 00:36:18.919 NVMe-MI: Not Supported 00:36:18.919 Virtualization Management: Not Supported 00:36:18.919 Doorbell Buffer Config: Supported 00:36:18.919 Get LBA Status Capability: Not Supported 00:36:18.919 Command & Feature Lockdown Capability: Not Supported 00:36:18.919 Abort Command Limit: 4 00:36:18.919 Async Event Request Limit: 4 00:36:18.919 Number of Firmware Slots: N/A 00:36:18.919 Firmware Slot 1 Read-Only: N/A 00:36:18.919 Firmware Activation Without Reset: N/A 00:36:18.919 Multiple Update Detection Support: N/A 00:36:18.919 Firmware Update Granularity: No Information Provided 00:36:18.919 Per-Namespace SMART Log: Yes 00:36:18.919 Asymmetric Namespace Access Log Page: Not Supported 00:36:18.919 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:36:18.919 Command Effects Log Page: Supported 00:36:18.919 Get Log Page Extended Data: Supported 00:36:18.919 Telemetry Log Pages: Not Supported
00:36:18.919 Persistent Event Log Pages: Not Supported 00:36:18.919 Supported Log Pages Log Page: May Support 00:36:18.919 Commands Supported & Effects Log Page: Not Supported 00:36:18.919 Feature Identifiers & Effects Log Page: May Support 00:36:18.919 NVMe-MI Commands & Effects Log Page: May Support 00:36:18.919 Data Area 4 for Telemetry Log: Not Supported 00:36:18.919 Error Log Page Entries Supported: 1 00:36:18.919 Keep Alive: Not Supported 00:36:18.919 00:36:18.919 NVM Command Set Attributes 00:36:18.919 ========================== 00:36:18.919 Submission Queue Entry Size 00:36:18.919 Max: 64 00:36:18.919 Min: 64 00:36:18.919 Completion Queue Entry Size 00:36:18.919 Max: 16 00:36:18.919 Min: 16 00:36:18.919 Number of Namespaces: 256 00:36:18.919 Compare Command: Supported 00:36:18.919 Write Uncorrectable Command: Not Supported 00:36:18.919 Dataset Management Command: Supported 00:36:18.919 Write Zeroes Command: Supported 00:36:18.919 Set Features Save Field: Supported 00:36:18.919 Reservations: Not Supported 00:36:18.919 Timestamp: Supported 00:36:18.919 Copy: Supported 00:36:18.919 Volatile Write Cache: Present 00:36:18.919 Atomic Write Unit (Normal): 1 00:36:18.919 Atomic Write Unit (PFail): 1 00:36:18.919 Atomic Compare & Write Unit: 1 00:36:18.919 Fused Compare & Write: Not Supported 00:36:18.919 Scatter-Gather List 00:36:18.919 SGL Command Set: Supported 00:36:18.919 SGL Keyed: Not Supported 00:36:18.919 SGL Bit Bucket Descriptor: Not Supported 00:36:18.919 SGL Metadata Pointer: Not Supported 00:36:18.919 Oversized SGL: Not Supported 00:36:18.919 SGL Metadata Address: Not Supported 00:36:18.919 SGL Offset: Not Supported 00:36:18.919 Transport SGL Data Block: Not Supported 00:36:18.919 Replay Protected Memory Block: Not Supported 00:36:18.919 00:36:18.919 Firmware Slot Information 00:36:18.919 ========================= 00:36:18.919 Active slot: 1 00:36:18.919 Slot 1 Firmware Revision: 1.0 00:36:18.919 00:36:18.919 00:36:18.919 Commands Supported and Effects 00:36:18.919 ============================== 00:36:18.919 Admin Commands 00:36:18.919 -------------- 00:36:18.919 Delete I/O Submission Queue (00h): Supported 00:36:18.919 Create I/O Submission Queue (01h): Supported 00:36:18.919 Get Log Page (02h): Supported 00:36:18.919 Delete I/O Completion Queue (04h): Supported 00:36:18.919 Create I/O Completion Queue (05h): Supported 00:36:18.919 Identify (06h): Supported 00:36:18.919 Abort (08h): Supported 00:36:18.919 Set Features (09h): Supported 00:36:18.919 Get Features (0Ah): Supported 00:36:18.919 Asynchronous Event Request (0Ch): Supported 00:36:18.919 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:18.919 Directive Send (19h): Supported 00:36:18.919 Directive Receive (1Ah): Supported 00:36:18.919 Virtualization Management (1Ch): Supported 00:36:18.919 Doorbell Buffer Config (7Ch): Supported 00:36:18.919 Format NVM (80h): Supported LBA-Change 00:36:18.919 I/O Commands 00:36:18.919 ------------ 00:36:18.919 Flush (00h): Supported LBA-Change 00:36:18.919 Write (01h): Supported LBA-Change 00:36:18.919 Read (02h): Supported 00:36:18.919 Compare (05h): Supported 00:36:18.919 Write Zeroes (08h): Supported LBA-Change 00:36:18.919 Dataset Management (09h): Supported LBA-Change 00:36:18.919 Unknown (0Ch): Supported 00:36:18.919 Unknown (12h): Supported 00:36:18.919 Copy (19h): Supported LBA-Change 00:36:18.919 Unknown (1Dh): Supported LBA-Change 00:36:18.919 00:36:18.919 Error Log 00:36:18.919 ========= 00:36:18.919 00:36:18.919 Arbitration 00:36:18.919 =========== 
00:36:18.919 Arbitration Burst: no limit 00:36:18.919 00:36:18.919 Power Management 00:36:18.919 ================ 00:36:18.919 Number of Power States: 1 00:36:18.919 Current Power State: Power State #0 00:36:18.919 Power State #0: 00:36:18.919 Max Power: 25.00 W 00:36:18.919 Non-Operational State: Operational 00:36:18.919 Entry Latency: 16 microseconds 00:36:18.919 Exit Latency: 4 microseconds 00:36:18.919 Relative Read Throughput: 0 00:36:18.919 Relative Read Latency: 0 00:36:18.919 Relative Write Throughput: 0 00:36:18.919 Relative Write Latency: 0 00:36:18.919 Idle Power: Not Reported 00:36:18.919 Active Power: Not Reported 00:36:18.919 Non-Operational Permissive Mode: Not Supported 00:36:18.919 00:36:18.919 Health Information 00:36:18.919 ================== 00:36:18.919 Critical Warnings: 00:36:18.919 Available Spare Space: OK 00:36:18.919 Temperature: OK 00:36:18.919 Device Reliability: OK 00:36:18.919 Read Only: No 00:36:18.919 Volatile Memory Backup: OK 00:36:18.919 Current Temperature: 323 Kelvin (50 Celsius) 00:36:18.919 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:18.919 Available Spare: 0% 00:36:18.919 Available Spare Threshold: 0% 00:36:18.919 Life Percentage Used: 0% 00:36:18.919 Data Units Read: 700 00:36:18.919 Data Units Written: 594 00:36:18.919 Host Read Commands: 32923 00:36:18.919 Host Write Commands: 31513 00:36:18.919 Controller Busy Time: 0 minutes 00:36:18.919 Power Cycles: 0 00:36:18.919 Power On Hours: 0 hours 00:36:18.919 Unsafe Shutdowns: 0 00:36:18.919 Unrecoverable Media Errors: 0 00:36:18.919 Lifetime Error Log Entries: 0 00:36:18.919 Warning Temperature Time: 0 minutes 00:36:18.919 Critical Temperature Time: 0 minutes 00:36:18.919 00:36:18.919 Number of Queues 00:36:18.919 ================ 00:36:18.919 Number of I/O Submission Queues: 64 00:36:18.919 Number of I/O Completion Queues: 64 00:36:18.919 00:36:18.919 ZNS Specific Controller Data 00:36:18.919 ============================ 00:36:18.919 Zone Append Size Limit: 0 00:36:18.919 00:36:18.919 00:36:18.919 Active Namespaces 00:36:18.919 ================= 00:36:18.919 Namespace ID:1 00:36:18.919 Error Recovery Timeout: Unlimited 00:36:18.919 Command Set Identifier: NVM (00h) 00:36:18.919 Deallocate: Supported 00:36:18.919 Deallocated/Unwritten Error: Supported 00:36:18.919 Deallocated Read Value: All 0x00 00:36:18.919 Deallocate in Write Zeroes: Not Supported 00:36:18.919 Deallocated Guard Field: 0xFFFF 00:36:18.919 Flush: Supported 00:36:18.919 Reservation: Not Supported 00:36:18.919 Namespace Sharing Capabilities: Multiple Controllers 00:36:18.919 Size (in LBAs): 262144 (1GiB) 00:36:18.919 Capacity (in LBAs): 262144 (1GiB) 00:36:18.920 Utilization (in LBAs): 262144 (1GiB) 00:36:18.920 Thin Provisioning: Not Supported 00:36:18.920 Per-NS Atomic Units: No 00:36:18.920 Maximum Single Source Range Length: 128 00:36:18.920 Maximum Copy Length: 128 00:36:18.920 Maximum Source Range Count: 128 00:36:18.920 NGUID/EUI64 Never Reused: No 00:36:18.920 Namespace Write Protected: No 00:36:18.920 Endurance group ID: 1 00:36:18.920 Number of LBA Formats: 8 00:36:18.920 Current LBA Format: LBA Format #04 00:36:18.920 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:18.920 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:18.920 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:18.920 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:18.920 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:18.920 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:18.920 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:36:18.920 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:18.920 00:36:18.920 Get Feature FDP: 00:36:18.920 ================ 00:36:18.920 Enabled: Yes 00:36:18.920 FDP configuration index: 0 00:36:18.920 00:36:18.920 FDP configurations log page 00:36:18.920 =========================== 00:36:18.920 Number of FDP configurations: 1 00:36:18.920 Version: 0 00:36:18.920 Size: 112 00:36:18.920 FDP Configuration Descriptor: 0 00:36:18.920 Descriptor Size: 96 00:36:18.920 Reclaim Group Identifier format: 2 00:36:18.920 FDP Volatile Write Cache: Not Present 00:36:18.920 FDP Configuration: Valid 00:36:18.920 Vendor Specific Size: 0 00:36:18.920 Number of Reclaim Groups: 2 00:36:18.920 Number of Reclaim Unit Handles: 8 00:36:18.920 Max Placement Identifiers: 128 00:36:18.920 Number of Namespaces Supported: 256 00:36:18.920 Reclaim unit Nominal Size: 6000000 bytes 00:36:18.920 Estimated Reclaim Unit Time Limit: Not Reported 00:36:18.920 RUH Desc #000: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #001: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #002: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #003: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #004: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #005: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #006: RUH Type: Initially Isolated 00:36:18.920 RUH Desc #007: RUH Type: Initially Isolated 00:36:18.920 00:36:18.920 FDP reclaim unit handle usage log page 00:36:19.178 ====================================== 00:36:19.178 Number of Reclaim Unit Handles: 8 00:36:19.178 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:36:19.178 RUH Usage Desc #001: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #002: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #003: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #004: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #005: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #006: RUH Attributes: Unused 00:36:19.178 RUH Usage Desc #007: RUH Attributes: Unused 00:36:19.178 00:36:19.178 FDP statistics log page 00:36:19.178 ======================= 00:36:19.178 Host bytes with metadata written: 380805120 00:36:19.178 Media bytes with metadata written: 380846080 00:36:19.178 Media bytes erased: 0 00:36:19.178 00:36:19.178 FDP events log page 00:36:19.178 =================== 00:36:19.178 Number of FDP events: 0 00:36:19.178 00:36:19.178 00:36:19.178 real 0m1.892s 00:36:19.178 user 0m0.654s 00:36:19.178 sys 0m0.968s 00:36:19.178 09:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:19.178 ************************************ 00:36:19.178 09:05:21 -- common/autotest_common.sh@10 -- # set +x 00:36:19.178 END TEST nvme_identify 00:36:19.178 ************************************ 00:36:19.178 09:05:21 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:36:19.178 09:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:19.178 09:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:19.178 09:05:21 -- common/autotest_common.sh@10 -- # set +x 00:36:19.178 ************************************ 00:36:19.178 START TEST nvme_perf 00:36:19.179 ************************************ 00:36:19.179 09:05:21 -- common/autotest_common.sh@1111 -- # nvme_perf 00:36:19.179 09:05:21 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:36:20.625 Initializing NVMe Controllers 00:36:20.625 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:20.625 
Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:20.625 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:20.625 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:20.625 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:36:20.625 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:36:20.625 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:36:20.625 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:36:20.625 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:36:20.625 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:36:20.625 Initialization complete. Launching workers. 00:36:20.625 ======================================================== 00:36:20.625 Latency(us) 00:36:20.625 Device Information : IOPS MiB/s Average min max 00:36:20.625 PCIE (0000:00:10.0) NSID 1 from core 0: 12084.54 141.62 10624.09 7835.65 41741.24 00:36:20.625 PCIE (0000:00:11.0) NSID 1 from core 0: 12084.54 141.62 10604.09 7965.90 39082.62 00:36:20.625 PCIE (0000:00:13.0) NSID 1 from core 0: 12084.54 141.62 10582.68 7905.16 36993.34 00:36:20.625 PCIE (0000:00:12.0) NSID 1 from core 0: 12084.54 141.62 10557.77 8021.01 34351.22 00:36:20.625 PCIE (0000:00:12.0) NSID 2 from core 0: 12084.54 141.62 10532.14 7945.69 31197.05 00:36:20.625 PCIE (0000:00:12.0) NSID 3 from core 0: 12084.54 141.62 10506.09 7954.34 28538.73 00:36:20.625 ======================================================== 00:36:20.625 Total : 72507.26 849.69 10567.81 7835.65 41741.24 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8301.227us 00:36:20.625 10.00000% : 8800.549us 00:36:20.625 25.00000% : 9175.040us 00:36:20.625 50.00000% : 9986.438us 00:36:20.625 75.00000% : 11297.158us 00:36:20.625 90.00000% : 12483.048us 00:36:20.625 95.00000% : 13856.183us 00:36:20.625 98.00000% : 15416.564us 00:36:20.625 99.00000% : 31956.602us 00:36:20.625 99.50000% : 39696.091us 00:36:20.625 99.90000% : 41443.718us 00:36:20.625 99.99000% : 41693.379us 00:36:20.625 99.99900% : 41943.040us 00:36:20.625 99.99990% : 41943.040us 00:36:20.625 99.99999% : 41943.040us 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8363.642us 00:36:20.625 10.00000% : 8862.964us 00:36:20.625 25.00000% : 9175.040us 00:36:20.625 50.00000% : 9924.023us 00:36:20.625 75.00000% : 11297.158us 00:36:20.625 90.00000% : 12420.632us 00:36:20.625 95.00000% : 14105.844us 00:36:20.625 98.00000% : 15291.733us 00:36:20.625 99.00000% : 30084.145us 00:36:20.625 99.50000% : 37199.482us 00:36:20.625 99.90000% : 38697.448us 00:36:20.625 99.99000% : 39196.770us 00:36:20.625 99.99900% : 39196.770us 00:36:20.625 99.99990% : 39196.770us 00:36:20.625 99.99999% : 39196.770us 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8363.642us 00:36:20.625 10.00000% : 8862.964us 00:36:20.625 25.00000% : 9237.455us 00:36:20.625 50.00000% : 9924.023us 00:36:20.625 75.00000% : 11297.158us 00:36:20.625 90.00000% : 12358.217us 00:36:20.625 95.00000% : 14105.844us 00:36:20.625 98.00000% : 15541.394us 00:36:20.625 99.00000% : 28086.857us 00:36:20.625 99.50000% : 34952.533us 00:36:20.625 99.90000% : 
36700.160us 00:36:20.625 99.99000% : 36949.821us 00:36:20.625 99.99900% : 37199.482us 00:36:20.625 99.99990% : 37199.482us 00:36:20.625 99.99999% : 37199.482us 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8363.642us 00:36:20.625 10.00000% : 8862.964us 00:36:20.625 25.00000% : 9237.455us 00:36:20.625 50.00000% : 9986.438us 00:36:20.625 75.00000% : 11234.743us 00:36:20.625 90.00000% : 12420.632us 00:36:20.625 95.00000% : 13981.013us 00:36:20.625 98.00000% : 15728.640us 00:36:20.625 99.00000% : 24966.095us 00:36:20.625 99.50000% : 31956.602us 00:36:20.625 99.90000% : 33953.890us 00:36:20.625 99.99000% : 34453.211us 00:36:20.625 99.99900% : 34453.211us 00:36:20.625 99.99990% : 34453.211us 00:36:20.625 99.99999% : 34453.211us 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8363.642us 00:36:20.625 10.00000% : 8862.964us 00:36:20.625 25.00000% : 9237.455us 00:36:20.625 50.00000% : 9924.023us 00:36:20.625 75.00000% : 11297.158us 00:36:20.625 90.00000% : 12607.878us 00:36:20.625 95.00000% : 13918.598us 00:36:20.625 98.00000% : 15728.640us 00:36:20.625 99.00000% : 22094.994us 00:36:20.625 99.50000% : 28960.670us 00:36:20.625 99.90000% : 30833.128us 00:36:20.625 99.99000% : 31207.619us 00:36:20.625 99.99900% : 31207.619us 00:36:20.625 99.99990% : 31207.619us 00:36:20.625 99.99999% : 31207.619us 00:36:20.625 00:36:20.625 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:20.625 ================================================================================= 00:36:20.625 1.00000% : 8363.642us 00:36:20.625 10.00000% : 8862.964us 00:36:20.625 25.00000% : 9175.040us 00:36:20.625 50.00000% : 9924.023us 00:36:20.625 75.00000% : 11297.158us 00:36:20.625 90.00000% : 12607.878us 00:36:20.625 95.00000% : 13918.598us 00:36:20.625 98.00000% : 15791.055us 00:36:20.625 99.00000% : 18849.402us 00:36:20.625 99.50000% : 26214.400us 00:36:20.625 99.90000% : 28211.688us 00:36:20.625 99.99000% : 28586.179us 00:36:20.625 99.99900% : 28586.179us 00:36:20.625 99.99990% : 28586.179us 00:36:20.625 99.99999% : 28586.179us 00:36:20.625 00:36:20.625 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:36:20.626 ============================================================================== 00:36:20.626 Range in us Cumulative IO count 00:36:20.626 7833.112 - 7864.320: 0.0248% ( 3) 00:36:20.626 7864.320 - 7895.528: 0.0413% ( 2) 00:36:20.626 7895.528 - 7926.735: 0.0496% ( 1) 00:36:20.626 7926.735 - 7957.943: 0.0827% ( 4) 00:36:20.626 7957.943 - 7989.150: 0.1323% ( 6) 00:36:20.626 7989.150 - 8051.566: 0.2480% ( 14) 00:36:20.626 8051.566 - 8113.981: 0.3886% ( 17) 00:36:20.626 8113.981 - 8176.396: 0.5704% ( 22) 00:36:20.626 8176.396 - 8238.811: 0.8102% ( 29) 00:36:20.626 8238.811 - 8301.227: 1.1326% ( 39) 00:36:20.626 8301.227 - 8363.642: 1.5542% ( 51) 00:36:20.626 8363.642 - 8426.057: 2.1577% ( 73) 00:36:20.626 8426.057 - 8488.472: 3.0010% ( 102) 00:36:20.626 8488.472 - 8550.888: 4.1253% ( 136) 00:36:20.626 8550.888 - 8613.303: 5.4812% ( 164) 00:36:20.626 8613.303 - 8675.718: 7.0767% ( 193) 00:36:20.626 8675.718 - 8738.133: 8.8459% ( 214) 00:36:20.626 8738.133 - 8800.549: 10.8466% ( 242) 00:36:20.626 8800.549 - 8862.964: 12.8555% ( 243) 00:36:20.626 8862.964 - 8925.379: 15.2034% ( 
284) 00:36:20.626 8925.379 - 8987.794: 17.6091% ( 291) 00:36:20.626 8987.794 - 9050.210: 20.1141% ( 303) 00:36:20.626 9050.210 - 9112.625: 22.7265% ( 316) 00:36:20.626 9112.625 - 9175.040: 25.1323% ( 291) 00:36:20.626 9175.040 - 9237.455: 27.7116% ( 312) 00:36:20.626 9237.455 - 9299.870: 30.3571% ( 320) 00:36:20.626 9299.870 - 9362.286: 32.8952% ( 307) 00:36:20.626 9362.286 - 9424.701: 35.3092% ( 292) 00:36:20.626 9424.701 - 9487.116: 37.8886% ( 312) 00:36:20.626 9487.116 - 9549.531: 39.9884% ( 254) 00:36:20.626 9549.531 - 9611.947: 42.0056% ( 244) 00:36:20.626 9611.947 - 9674.362: 43.8905% ( 228) 00:36:20.626 9674.362 - 9736.777: 45.4944% ( 194) 00:36:20.626 9736.777 - 9799.192: 46.9246% ( 173) 00:36:20.626 9799.192 - 9861.608: 48.1729% ( 151) 00:36:20.626 9861.608 - 9924.023: 49.4792% ( 158) 00:36:20.626 9924.023 - 9986.438: 50.8019% ( 160) 00:36:20.626 9986.438 - 10048.853: 51.9428% ( 138) 00:36:20.626 10048.853 - 10111.269: 52.9679% ( 124) 00:36:20.626 10111.269 - 10173.684: 54.0923% ( 136) 00:36:20.626 10173.684 - 10236.099: 55.1505% ( 128) 00:36:20.626 10236.099 - 10298.514: 56.1839% ( 125) 00:36:20.626 10298.514 - 10360.930: 57.2173% ( 125) 00:36:20.626 10360.930 - 10423.345: 58.1267% ( 110) 00:36:20.626 10423.345 - 10485.760: 59.1187% ( 120) 00:36:20.626 10485.760 - 10548.175: 60.1521% ( 125) 00:36:20.626 10548.175 - 10610.590: 61.2847% ( 137) 00:36:20.626 10610.590 - 10673.006: 62.4669% ( 143) 00:36:20.626 10673.006 - 10735.421: 63.6657% ( 145) 00:36:20.626 10735.421 - 10797.836: 64.9471% ( 155) 00:36:20.626 10797.836 - 10860.251: 66.4269% ( 179) 00:36:20.626 10860.251 - 10922.667: 67.8158% ( 168) 00:36:20.626 10922.667 - 10985.082: 69.1964% ( 167) 00:36:20.626 10985.082 - 11047.497: 70.5440% ( 163) 00:36:20.626 11047.497 - 11109.912: 71.8998% ( 164) 00:36:20.626 11109.912 - 11172.328: 73.1233% ( 148) 00:36:20.626 11172.328 - 11234.743: 74.3634% ( 150) 00:36:20.626 11234.743 - 11297.158: 75.6862% ( 160) 00:36:20.626 11297.158 - 11359.573: 76.8436% ( 140) 00:36:20.626 11359.573 - 11421.989: 78.0258% ( 143) 00:36:20.626 11421.989 - 11484.404: 79.2080% ( 143) 00:36:20.626 11484.404 - 11546.819: 80.3985% ( 144) 00:36:20.626 11546.819 - 11609.234: 81.4732% ( 130) 00:36:20.626 11609.234 - 11671.650: 82.4983% ( 124) 00:36:20.626 11671.650 - 11734.065: 83.4243% ( 112) 00:36:20.626 11734.065 - 11796.480: 84.3337% ( 110) 00:36:20.626 11796.480 - 11858.895: 85.1769% ( 102) 00:36:20.626 11858.895 - 11921.310: 86.0284% ( 103) 00:36:20.626 11921.310 - 11983.726: 86.7063% ( 82) 00:36:20.626 11983.726 - 12046.141: 87.3264% ( 75) 00:36:20.626 12046.141 - 12108.556: 87.8390% ( 62) 00:36:20.626 12108.556 - 12170.971: 88.2523% ( 50) 00:36:20.626 12170.971 - 12233.387: 88.6822% ( 52) 00:36:20.626 12233.387 - 12295.802: 89.0790% ( 48) 00:36:20.626 12295.802 - 12358.217: 89.4593% ( 46) 00:36:20.626 12358.217 - 12420.632: 89.7404% ( 34) 00:36:20.626 12420.632 - 12483.048: 90.0711% ( 40) 00:36:20.626 12483.048 - 12545.463: 90.3853% ( 38) 00:36:20.626 12545.463 - 12607.878: 90.7077% ( 39) 00:36:20.626 12607.878 - 12670.293: 91.0053% ( 36) 00:36:20.626 12670.293 - 12732.709: 91.3194% ( 38) 00:36:20.626 12732.709 - 12795.124: 91.6253% ( 37) 00:36:20.626 12795.124 - 12857.539: 91.8899% ( 32) 00:36:20.626 12857.539 - 12919.954: 92.1462% ( 31) 00:36:20.626 12919.954 - 12982.370: 92.3776% ( 28) 00:36:20.626 12982.370 - 13044.785: 92.6091% ( 28) 00:36:20.626 13044.785 - 13107.200: 92.8158% ( 25) 00:36:20.626 13107.200 - 13169.615: 93.0390% ( 27) 00:36:20.626 13169.615 - 13232.030: 93.2540% ( 26) 00:36:20.626 
13232.030 - 13294.446: 93.4193% ( 20) 00:36:20.626 13294.446 - 13356.861: 93.5764% ( 19) 00:36:20.626 13356.861 - 13419.276: 93.7583% ( 22) 00:36:20.626 13419.276 - 13481.691: 93.9732% ( 26) 00:36:20.626 13481.691 - 13544.107: 94.1303% ( 19) 00:36:20.626 13544.107 - 13606.522: 94.3370% ( 25) 00:36:20.626 13606.522 - 13668.937: 94.5188% ( 22) 00:36:20.626 13668.937 - 13731.352: 94.6842% ( 20) 00:36:20.626 13731.352 - 13793.768: 94.8743% ( 23) 00:36:20.626 13793.768 - 13856.183: 95.0066% ( 16) 00:36:20.626 13856.183 - 13918.598: 95.1802% ( 21) 00:36:20.626 13918.598 - 13981.013: 95.3373% ( 19) 00:36:20.626 13981.013 - 14043.429: 95.5026% ( 20) 00:36:20.626 14043.429 - 14105.844: 95.6184% ( 14) 00:36:20.626 14105.844 - 14168.259: 95.7507% ( 16) 00:36:20.626 14168.259 - 14230.674: 95.8995% ( 18) 00:36:20.626 14230.674 - 14293.090: 96.0483% ( 18) 00:36:20.626 14293.090 - 14355.505: 96.1723% ( 15) 00:36:20.626 14355.505 - 14417.920: 96.2880% ( 14) 00:36:20.626 14417.920 - 14480.335: 96.4120% ( 15) 00:36:20.626 14480.335 - 14542.750: 96.5278% ( 14) 00:36:20.626 14542.750 - 14605.166: 96.6766% ( 18) 00:36:20.626 14605.166 - 14667.581: 96.8089% ( 16) 00:36:20.626 14667.581 - 14729.996: 96.9577% ( 18) 00:36:20.626 14729.996 - 14792.411: 97.0982% ( 17) 00:36:20.626 14792.411 - 14854.827: 97.2388% ( 17) 00:36:20.626 14854.827 - 14917.242: 97.3380% ( 12) 00:36:20.626 14917.242 - 14979.657: 97.4537% ( 14) 00:36:20.626 14979.657 - 15042.072: 97.5612% ( 13) 00:36:20.626 15042.072 - 15104.488: 97.6521% ( 11) 00:36:20.626 15104.488 - 15166.903: 97.7596% ( 13) 00:36:20.626 15166.903 - 15229.318: 97.8671% ( 13) 00:36:20.626 15229.318 - 15291.733: 97.9084% ( 5) 00:36:20.626 15291.733 - 15354.149: 97.9911% ( 10) 00:36:20.626 15354.149 - 15416.564: 98.0820% ( 11) 00:36:20.626 15416.564 - 15478.979: 98.1316% ( 6) 00:36:20.626 15478.979 - 15541.394: 98.1978% ( 8) 00:36:20.626 15541.394 - 15603.810: 98.2722% ( 9) 00:36:20.626 15603.810 - 15666.225: 98.3135% ( 5) 00:36:20.626 15666.225 - 15728.640: 98.3466% ( 4) 00:36:20.626 15728.640 - 15791.055: 98.3962% ( 6) 00:36:20.626 15791.055 - 15853.470: 98.4375% ( 5) 00:36:20.626 15853.470 - 15915.886: 98.4871% ( 6) 00:36:20.626 15915.886 - 15978.301: 98.5284% ( 5) 00:36:20.626 15978.301 - 16103.131: 98.6028% ( 9) 00:36:20.626 16103.131 - 16227.962: 98.7021% ( 12) 00:36:20.626 16227.962 - 16352.792: 98.7434% ( 5) 00:36:20.626 16352.792 - 16477.623: 98.7847% ( 5) 00:36:20.626 16477.623 - 16602.453: 98.8343% ( 6) 00:36:20.626 16602.453 - 16727.284: 98.8839% ( 6) 00:36:20.626 16727.284 - 16852.114: 98.9253% ( 5) 00:36:20.626 16852.114 - 16976.945: 98.9418% ( 2) 00:36:20.626 31582.110 - 31706.941: 98.9583% ( 2) 00:36:20.626 31706.941 - 31831.771: 98.9831% ( 3) 00:36:20.626 31831.771 - 31956.602: 99.0162% ( 4) 00:36:20.626 31956.602 - 32206.263: 99.0741% ( 7) 00:36:20.626 32206.263 - 32455.924: 99.1237% ( 6) 00:36:20.626 32455.924 - 32705.585: 99.1815% ( 7) 00:36:20.626 32705.585 - 32955.246: 99.2394% ( 7) 00:36:20.626 32955.246 - 33204.907: 99.3056% ( 8) 00:36:20.626 33204.907 - 33454.568: 99.3552% ( 6) 00:36:20.626 33454.568 - 33704.229: 99.4048% ( 6) 00:36:20.626 33704.229 - 33953.890: 99.4626% ( 7) 00:36:20.626 33953.890 - 34203.550: 99.4709% ( 1) 00:36:20.626 39196.770 - 39446.430: 99.4792% ( 1) 00:36:20.626 39446.430 - 39696.091: 99.5370% ( 7) 00:36:20.626 39696.091 - 39945.752: 99.6032% ( 8) 00:36:20.626 39945.752 - 40195.413: 99.6528% ( 6) 00:36:20.626 40195.413 - 40445.074: 99.7024% ( 6) 00:36:20.626 40445.074 - 40694.735: 99.7603% ( 7) 00:36:20.626 40694.735 - 
40944.396: 99.8181% ( 7)
00:36:20.626 40944.396 - 41194.057: 99.8760% ( 7)
00:36:20.626 41194.057 - 41443.718: 99.9339% ( 7)
00:36:20.626 41443.718 - 41693.379: 99.9917% ( 7)
00:36:20.626 41693.379 - 41943.040: 100.0000% ( 1)
00:36:20.626
00:36:20.626 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:36:20.626 ==============================================================================
00:36:20.626 Range in us Cumulative IO count
00:36:20.626 7957.943 - 7989.150: 0.0165% ( 2)
[...]
00:36:20.627 38947.109 - 39196.770: 100.0000% ( 5)
00:36:20.627
00:36:20.627 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:36:20.627 ==============================================================================
00:36:20.627 Range in us Cumulative IO count
00:36:20.627 7895.528 - 7926.735: 0.0165% ( 2)
[...]
00:36:20.628 36949.821 - 37199.482: 100.0000% ( 1)
00:36:20.628
00:36:20.628 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:36:20.628 ==============================================================================
00:36:20.628 Range in us Cumulative IO count
00:36:20.628 7989.150 - 8051.566: 0.0331% ( 4)
[...]
00:36:20.629 34203.550 - 34453.211: 100.0000% ( 5)
00:36:20.629
00:36:20.629 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:36:20.629 ==============================================================================
00:36:20.629 Range in us Cumulative IO count
00:36:20.629 7926.735 - 7957.943: 0.0083% ( 1)
[...]
00:36:20.630 31082.789 - 31207.619: 100.0000% ( 4)
00:36:20.630
00:36:20.630 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:36:20.630 ==============================================================================
00:36:20.630 Range in us Cumulative IO count
00:36:20.630 7926.735 - 7957.943: 0.0165% ( 2)
[...]
00:36:20.631 28461.349 - 28586.179: 100.0000% ( 2)
00:36:20.631
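How to read the histogram blocks above and below: each row is '<low> - <high>: <pct>% ( <n> )', where <n> is the number of I/Os whose completion latency fell into that microsecond bucket and <pct> is the cumulative share of all I/Os completed at or below <high>, reaching 100.0000% in the slowest bucket. A throwaway awk sketch for pulling the first bucket at which a device crosses p99 out of a saved copy of this console log (the file name build.log is hypothetical, and the field positions assume the timestamp prefix is still present):

  awk '$3 == "-" && $5+0 >= 99 { print $2, "-", $4, $5; exit }' build.log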
00:36:20.631 09:05:22 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:36:22.008 Initializing NVMe Controllers
00:36:22.008 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:36:22.008 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:36:22.008 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:36:22.008 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:36:22.008 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:36:22.008 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:36:22.008 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:36:22.008 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:36:22.008 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:36:22.008 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:36:22.008 Initialization complete. Launching workers.
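For readability, here is the same spdk_nvme_perf invocation broken out; the flag glosses are an assumption based on typical spdk_nvme_perf usage and are not stated anywhere in this log:

  # -q 128   : keep 128 I/Os outstanding per namespace (queue depth)
  # -w write : 100% sequential-write workload
  # -o 12288 : 12288-byte (12 KiB) I/Os
  # -t 1     : run for 1 second
  # -LL      : -L enables software latency tracking; giving it twice also
  #            prints the per-bucket latency histograms seen below
  # -i 0     : shared-memory ID 0, so the process can coexist with other
  #            SPDK processes on the host
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0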
00:36:22.008 ========================================================
00:36:22.008                                            Latency(us)
00:36:22.008 Device Information                     :       IOPS      MiB/s    Average        min        max
00:36:22.008 PCIE (0000:00:10.0) NSID 1 from core 0:    8093.67      94.85   15848.44   10049.62   58190.40
00:36:22.008 PCIE (0000:00:11.0) NSID 1 from core 0:    8093.67      94.85   15791.20   10169.44   54905.37
00:36:22.008 PCIE (0000:00:13.0) NSID 1 from core 0:    8093.67      94.85   15742.92    9726.89   52547.34
00:36:22.008 PCIE (0000:00:12.0) NSID 1 from core 0:    8093.67      94.85   15695.35   10045.24   49891.20
00:36:22.008 PCIE (0000:00:12.0) NSID 2 from core 0:    8093.67      94.85   15647.85   10126.43   47354.06
00:36:22.008 PCIE (0000:00:12.0) NSID 3 from core 0:    8093.67      94.85   15601.24    9996.27   44697.36
00:36:22.008 ========================================================
00:36:22.008 Total                                  :   48562.05     569.09   15721.17    9726.89   58190.40
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10485.760us
00:36:22.008 10.00000% : 11172.328us
00:36:22.008 25.00000% : 11734.065us
00:36:22.008 50.00000% : 12795.124us
00:36:22.008 75.00000% : 20846.690us
00:36:22.008 90.00000% : 22594.316us
00:36:22.008 95.00000% : 23343.299us
00:36:22.008 98.00000% : 24341.943us
00:36:22.008 99.00000% : 42442.362us
00:36:22.008 99.50000% : 55674.392us
00:36:22.008 99.90000% : 57671.680us
00:36:22.008 99.99000% : 58420.663us
00:36:22.008 99.99900% : 58420.663us
00:36:22.008 99.99990% : 58420.663us
00:36:22.008 99.99999% : 58420.663us
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10610.590us
00:36:22.008 10.00000% : 11172.328us
00:36:22.008 25.00000% : 11796.480us
00:36:22.008 50.00000% : 12670.293us
00:36:22.008 75.00000% : 21096.350us
00:36:22.008 90.00000% : 22219.825us
00:36:22.008 95.00000% : 22843.977us
00:36:22.008 98.00000% : 23717.790us
00:36:22.008 99.00000% : 40445.074us
00:36:22.008 99.50000% : 52678.461us
00:36:22.008 99.90000% : 54426.088us
00:36:22.008 99.99000% : 54925.410us
00:36:22.008 99.99900% : 54925.410us
00:36:22.008 99.99990% : 54925.410us
00:36:22.008 99.99999% : 54925.410us
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10548.175us
00:36:22.008 10.00000% : 11234.743us
00:36:22.008 25.00000% : 11796.480us
00:36:22.008 50.00000% : 12670.293us
00:36:22.008 75.00000% : 21096.350us
00:36:22.008 90.00000% : 22219.825us
00:36:22.008 95.00000% : 22719.147us
00:36:22.008 98.00000% : 23468.130us
00:36:22.008 99.00000% : 38447.787us
00:36:22.008 99.50000% : 50181.851us
00:36:22.008 99.90000% : 52179.139us
00:36:22.008 99.99000% : 52678.461us
00:36:22.008 99.99900% : 52678.461us
00:36:22.008 99.99990% : 52678.461us
00:36:22.008 99.99999% : 52678.461us
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10610.590us
00:36:22.008 10.00000% : 11234.743us
00:36:22.008 25.00000% : 11796.480us
00:36:22.008 50.00000% : 12670.293us
00:36:22.008 75.00000% : 21096.350us
00:36:22.008 90.00000% : 22219.825us
00:36:22.008 95.00000% : 22843.977us
00:36:22.008 98.00000% : 23592.960us
00:36:22.008 99.00000% : 35951.177us
00:36:22.008 99.50000% : 47685.242us
00:36:22.008 99.90000% : 49682.530us
00:36:22.008 99.99000% : 49932.190us
00:36:22.008 99.99900% : 49932.190us
00:36:22.008 99.99990% : 49932.190us
00:36:22.008 99.99999% : 49932.190us
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10485.760us
00:36:22.008 10.00000% : 11297.158us
00:36:22.008 25.00000% : 11796.480us
00:36:22.008 50.00000% : 12670.293us
00:36:22.008 75.00000% : 21096.350us
00:36:22.008 90.00000% : 22094.994us
00:36:22.008 95.00000% : 22719.147us
00:36:22.008 98.00000% : 23592.960us
00:36:22.008 99.00000% : 33204.907us
00:36:22.008 99.50000% : 45188.632us
00:36:22.008 99.90000% : 46936.259us
00:36:22.008 99.99000% : 47435.581us
00:36:22.008 99.99900% : 47435.581us
00:36:22.008 99.99990% : 47435.581us
00:36:22.008 99.99999% : 47435.581us
00:36:22.008
00:36:22.008 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:36:22.008 =================================================================================
00:36:22.008 1.00000% : 10548.175us
00:36:22.008 10.00000% : 11234.743us
00:36:22.008 25.00000% : 11734.065us
00:36:22.008 50.00000% : 12670.293us
00:36:22.008 75.00000% : 21096.350us
00:36:22.008 90.00000% : 22094.994us
00:36:22.008 95.00000% : 22843.977us
00:36:22.008 98.00000% : 23592.960us
00:36:22.008 99.00000% : 30833.128us
00:36:22.008 99.50000% : 42442.362us
00:36:22.008 99.90000% : 44439.650us
00:36:22.008 99.99000% : 44938.971us
00:36:22.008 99.99900% : 44938.971us
00:36:22.008 99.99990% : 44938.971us
00:36:22.008 99.99999% : 44938.971us
00:36:22.008
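The MiB/s column in the table above is consistent with IOPS times the 12288-byte I/O size: 8093.67 * 12288 / 2^20 ≈ 94.85 per device, and 48562.05 * 12288 / 2^20 ≈ 569.09 for the Total row. A one-line check (the awk call is illustrative only, not part of the test run):

  awk 'BEGIN { printf "%.2f\n", 8093.67 * 12288 / (1024 * 1024) }'   # prints 94.85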
00:36:22.008 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:36:22.008 ==============================================================================
00:36:22.008 Range in us Cumulative IO count
00:36:22.008 10048.853 - 10111.269: 0.0492% ( 4)
[...]
00:36:22.009 58171.002 - 58420.663: 100.0000% ( 1)
00:36:22.009
00:36:22.009 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:36:22.009 ==============================================================================
00:36:22.009 Range in us Cumulative IO count
00:36:22.009 10111.269 - 10173.684: 0.0123% ( 1)
[...]
00:36:22.010 54675.749 - 54925.410: 100.0000% ( 4)
00:36:22.010
00:36:22.010 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:36:22.010 ==============================================================================
00:36:22.010 Range in us Cumulative IO count
00:36:22.010 9674.362 - 9736.777: 0.0123% ( 1)
[...]
14168.259 - 14230.674: 62.7338% ( 6) 00:36:22.011 14230.674 - 14293.090: 62.7953% ( 5) 00:36:22.011 14293.090 - 14355.505: 62.9306% ( 11) 00:36:22.011 14355.505 - 14417.920: 63.0659% ( 11) 00:36:22.011 14417.920 - 14480.335: 63.2382% ( 14) 00:36:22.011 14480.335 - 14542.750: 63.3612% ( 10) 00:36:22.011 14542.750 - 14605.166: 63.6442% ( 23) 00:36:22.011 14605.166 - 14667.581: 64.0256% ( 31) 00:36:22.011 14667.581 - 14729.996: 64.1240% ( 8) 00:36:22.011 14729.996 - 14792.411: 64.2224% ( 8) 00:36:22.011 14792.411 - 14854.827: 64.3209% ( 8) 00:36:22.011 14854.827 - 14917.242: 64.3824% ( 5) 00:36:22.011 14917.242 - 14979.657: 64.4562% ( 6) 00:36:22.011 14979.657 - 15042.072: 64.4931% ( 3) 00:36:22.011 15042.072 - 15104.488: 64.5177% ( 2) 00:36:22.011 15104.488 - 15166.903: 64.5546% ( 3) 00:36:22.011 15166.903 - 15229.318: 64.5669% ( 1) 00:36:22.011 15229.318 - 15291.733: 64.6777% ( 9) 00:36:22.011 15291.733 - 15354.149: 64.7023% ( 2) 00:36:22.011 15354.149 - 15416.564: 64.7146% ( 1) 00:36:22.011 15416.564 - 15478.979: 64.7392% ( 2) 00:36:22.011 15478.979 - 15541.394: 64.7515% ( 1) 00:36:22.011 15541.394 - 15603.810: 64.7761% ( 2) 00:36:22.011 15603.810 - 15666.225: 64.8007% ( 2) 00:36:22.011 15666.225 - 15728.640: 64.8130% ( 1) 00:36:22.011 15978.301 - 16103.131: 64.9483% ( 11) 00:36:22.011 16103.131 - 16227.962: 65.0098% ( 5) 00:36:22.011 16227.962 - 16352.792: 65.0468% ( 3) 00:36:22.011 16352.792 - 16477.623: 65.0837% ( 3) 00:36:22.011 16477.623 - 16602.453: 65.1329% ( 4) 00:36:22.011 16602.453 - 16727.284: 65.1698% ( 3) 00:36:22.011 16727.284 - 16852.114: 65.2067% ( 3) 00:36:22.011 16852.114 - 16976.945: 65.2436% ( 3) 00:36:22.011 16976.945 - 17101.775: 65.2928% ( 4) 00:36:22.011 17101.775 - 17226.606: 65.3297% ( 3) 00:36:22.011 17226.606 - 17351.436: 65.3543% ( 2) 00:36:22.011 19348.724 - 19473.554: 65.4158% ( 5) 00:36:22.011 19473.554 - 19598.385: 65.6127% ( 16) 00:36:22.011 19598.385 - 19723.215: 66.1294% ( 42) 00:36:22.011 19723.215 - 19848.046: 66.4124% ( 23) 00:36:22.011 19848.046 - 19972.876: 66.8922% ( 39) 00:36:22.011 19972.876 - 20097.707: 67.5689% ( 55) 00:36:22.011 20097.707 - 20222.537: 68.3317% ( 62) 00:36:22.011 20222.537 - 20347.368: 69.1683% ( 68) 00:36:22.011 20347.368 - 20472.198: 70.2633% ( 89) 00:36:22.011 20472.198 - 20597.029: 71.3706% ( 90) 00:36:22.011 20597.029 - 20721.859: 72.5640% ( 97) 00:36:22.011 20721.859 - 20846.690: 73.7328% ( 95) 00:36:22.011 20846.690 - 20971.520: 74.9508% ( 99) 00:36:22.011 20971.520 - 21096.350: 76.2918% ( 109) 00:36:22.011 21096.350 - 21221.181: 77.5960% ( 106) 00:36:22.011 21221.181 - 21346.011: 78.8017% ( 98) 00:36:22.011 21346.011 - 21470.842: 80.4011% ( 130) 00:36:22.011 21470.842 - 21595.672: 82.2589% ( 151) 00:36:22.011 21595.672 - 21720.503: 83.8460% ( 129) 00:36:22.011 21720.503 - 21845.333: 85.3716% ( 124) 00:36:22.011 21845.333 - 21970.164: 87.0940% ( 140) 00:36:22.011 21970.164 - 22094.994: 88.6565% ( 127) 00:36:22.011 22094.994 - 22219.825: 90.2928% ( 133) 00:36:22.011 22219.825 - 22344.655: 91.8922% ( 130) 00:36:22.011 22344.655 - 22469.486: 92.9995% ( 90) 00:36:22.011 22469.486 - 22594.316: 93.9345% ( 76) 00:36:22.011 22594.316 - 22719.147: 95.0664% ( 92) 00:36:22.011 22719.147 - 22843.977: 95.7308% ( 54) 00:36:22.011 22843.977 - 22968.808: 96.5305% ( 65) 00:36:22.011 22968.808 - 23093.638: 97.0349% ( 41) 00:36:22.011 23093.638 - 23218.469: 97.6624% ( 51) 00:36:22.011 23218.469 - 23343.299: 97.8593% ( 16) 00:36:22.011 23343.299 - 23468.130: 98.0069% ( 12) 00:36:22.011 23468.130 - 23592.960: 98.1176% ( 9) 00:36:22.011 
23592.960 - 23717.790: 98.2037% ( 7) 00:36:22.011 23717.790 - 23842.621: 98.3145% ( 9) 00:36:22.011 23842.621 - 23967.451: 98.4006% ( 7) 00:36:22.011 23967.451 - 24092.282: 98.4252% ( 2) 00:36:22.011 35451.855 - 35701.516: 98.4375% ( 1) 00:36:22.011 35701.516 - 35951.177: 98.4867% ( 4) 00:36:22.011 35951.177 - 36200.838: 98.6344% ( 12) 00:36:22.011 36200.838 - 36450.499: 98.6713% ( 3) 00:36:22.011 36450.499 - 36700.160: 98.7082% ( 3) 00:36:22.011 36700.160 - 36949.821: 98.7451% ( 3) 00:36:22.011 36949.821 - 37199.482: 98.7943% ( 4) 00:36:22.011 37199.482 - 37449.143: 98.8435% ( 4) 00:36:22.011 37449.143 - 37698.804: 98.8927% ( 4) 00:36:22.011 37698.804 - 37948.465: 98.9419% ( 4) 00:36:22.011 37948.465 - 38198.126: 98.9911% ( 4) 00:36:22.011 38198.126 - 38447.787: 99.0404% ( 4) 00:36:22.011 38447.787 - 38697.448: 99.0896% ( 4) 00:36:22.011 38697.448 - 38947.109: 99.1388% ( 4) 00:36:22.011 38947.109 - 39196.770: 99.2003% ( 5) 00:36:22.011 39196.770 - 39446.430: 99.2126% ( 1) 00:36:22.011 48683.886 - 48933.547: 99.2495% ( 3) 00:36:22.011 48933.547 - 49183.208: 99.3110% ( 5) 00:36:22.011 49183.208 - 49432.869: 99.3479% ( 3) 00:36:22.011 49432.869 - 49682.530: 99.4094% ( 5) 00:36:22.011 49682.530 - 49932.190: 99.4587% ( 4) 00:36:22.011 49932.190 - 50181.851: 99.5079% ( 4) 00:36:22.011 50181.851 - 50431.512: 99.5571% ( 4) 00:36:22.011 50431.512 - 50681.173: 99.6186% ( 5) 00:36:22.011 50681.173 - 50930.834: 99.6678% ( 4) 00:36:22.011 50930.834 - 51180.495: 99.7047% ( 3) 00:36:22.011 51180.495 - 51430.156: 99.7662% ( 5) 00:36:22.011 51430.156 - 51679.817: 99.8155% ( 4) 00:36:22.011 51679.817 - 51929.478: 99.8647% ( 4) 00:36:22.011 51929.478 - 52179.139: 99.9139% ( 4) 00:36:22.011 52179.139 - 52428.800: 99.9754% ( 5) 00:36:22.011 52428.800 - 52678.461: 100.0000% ( 2) 00:36:22.011 00:36:22.011 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:36:22.011 ============================================================================== 00:36:22.011 Range in us Cumulative IO count 00:36:22.011 9986.438 - 10048.853: 0.0123% ( 1) 00:36:22.011 10048.853 - 10111.269: 0.0615% ( 4) 00:36:22.011 10111.269 - 10173.684: 0.1230% ( 5) 00:36:22.011 10173.684 - 10236.099: 0.1599% ( 3) 00:36:22.011 10236.099 - 10298.514: 0.2338% ( 6) 00:36:22.011 10298.514 - 10360.930: 0.3076% ( 6) 00:36:22.011 10360.930 - 10423.345: 0.4552% ( 12) 00:36:22.011 10423.345 - 10485.760: 0.6767% ( 18) 00:36:22.011 10485.760 - 10548.175: 0.9473% ( 22) 00:36:22.011 10548.175 - 10610.590: 1.4518% ( 41) 00:36:22.011 10610.590 - 10673.006: 1.9685% ( 42) 00:36:22.011 10673.006 - 10735.421: 2.4237% ( 37) 00:36:22.011 10735.421 - 10797.836: 3.0389% ( 50) 00:36:22.011 10797.836 - 10860.251: 3.7771% ( 60) 00:36:22.011 10860.251 - 10922.667: 4.7736% ( 81) 00:36:22.011 10922.667 - 10985.082: 5.8194% ( 85) 00:36:22.011 10985.082 - 11047.497: 6.9267% ( 90) 00:36:22.011 11047.497 - 11109.912: 8.1693% ( 101) 00:36:22.011 11109.912 - 11172.328: 9.0797% ( 74) 00:36:22.011 11172.328 - 11234.743: 10.2731% ( 97) 00:36:22.011 11234.743 - 11297.158: 12.0325% ( 143) 00:36:22.011 11297.158 - 11359.573: 13.3735% ( 109) 00:36:22.011 11359.573 - 11421.989: 14.9975% ( 132) 00:36:22.011 11421.989 - 11484.404: 16.6954% ( 138) 00:36:22.011 11484.404 - 11546.819: 18.6024% ( 155) 00:36:22.011 11546.819 - 11609.234: 20.3863% ( 145) 00:36:22.011 11609.234 - 11671.650: 22.2318% ( 150) 00:36:22.011 11671.650 - 11734.065: 23.8681% ( 133) 00:36:22.011 11734.065 - 11796.480: 25.8120% ( 158) 00:36:22.011 11796.480 - 11858.895: 27.6821% ( 152) 00:36:22.011 11858.895 - 
11921.310: 29.7490% ( 168) 00:36:22.011 11921.310 - 11983.726: 31.5207% ( 144) 00:36:22.011 11983.726 - 12046.141: 33.5999% ( 169) 00:36:22.011 12046.141 - 12108.556: 35.7899% ( 178) 00:36:22.011 12108.556 - 12170.971: 37.8445% ( 167) 00:36:22.011 12170.971 - 12233.387: 39.5546% ( 139) 00:36:22.011 12233.387 - 12295.802: 41.5477% ( 162) 00:36:22.011 12295.802 - 12358.217: 43.6270% ( 169) 00:36:22.012 12358.217 - 12420.632: 45.3740% ( 142) 00:36:22.012 12420.632 - 12483.048: 47.0842% ( 139) 00:36:22.012 12483.048 - 12545.463: 48.4867% ( 114) 00:36:22.012 12545.463 - 12607.878: 49.7170% ( 100) 00:36:22.012 12607.878 - 12670.293: 50.8366% ( 91) 00:36:22.012 12670.293 - 12732.709: 51.8332% ( 81) 00:36:22.012 12732.709 - 12795.124: 53.1742% ( 109) 00:36:22.012 12795.124 - 12857.539: 54.0846% ( 74) 00:36:22.012 12857.539 - 12919.954: 55.0689% ( 80) 00:36:22.012 12919.954 - 12982.370: 55.8809% ( 66) 00:36:22.012 12982.370 - 13044.785: 56.8282% ( 77) 00:36:22.012 13044.785 - 13107.200: 57.6280% ( 65) 00:36:22.012 13107.200 - 13169.615: 58.2554% ( 51) 00:36:22.012 13169.615 - 13232.030: 58.8214% ( 46) 00:36:22.012 13232.030 - 13294.446: 59.3135% ( 40) 00:36:22.012 13294.446 - 13356.861: 59.7564% ( 36) 00:36:22.012 13356.861 - 13419.276: 60.1747% ( 34) 00:36:22.012 13419.276 - 13481.691: 60.5192% ( 28) 00:36:22.012 13481.691 - 13544.107: 60.7653% ( 20) 00:36:22.012 13544.107 - 13606.522: 60.9744% ( 17) 00:36:22.012 13606.522 - 13668.937: 61.2082% ( 19) 00:36:22.012 13668.937 - 13731.352: 61.3804% ( 14) 00:36:22.012 13731.352 - 13793.768: 61.5650% ( 15) 00:36:22.012 13793.768 - 13856.183: 61.7741% ( 17) 00:36:22.012 13856.183 - 13918.598: 61.9094% ( 11) 00:36:22.012 13918.598 - 13981.013: 62.0202% ( 9) 00:36:22.012 13981.013 - 14043.429: 62.2047% ( 15) 00:36:22.012 14043.429 - 14105.844: 62.3770% ( 14) 00:36:22.012 14105.844 - 14168.259: 62.5246% ( 12) 00:36:22.012 14168.259 - 14230.674: 62.6845% ( 13) 00:36:22.012 14230.674 - 14293.090: 62.8568% ( 14) 00:36:22.012 14293.090 - 14355.505: 63.0167% ( 13) 00:36:22.012 14355.505 - 14417.920: 63.0906% ( 6) 00:36:22.012 14417.920 - 14480.335: 63.1767% ( 7) 00:36:22.012 14480.335 - 14542.750: 63.2382% ( 5) 00:36:22.012 14542.750 - 14605.166: 63.4350% ( 16) 00:36:22.012 14605.166 - 14667.581: 63.6442% ( 17) 00:36:22.012 14667.581 - 14729.996: 63.7180% ( 6) 00:36:22.012 14729.996 - 14792.411: 63.7672% ( 4) 00:36:22.012 14792.411 - 14854.827: 63.8410% ( 6) 00:36:22.012 14854.827 - 14917.242: 63.8903% ( 4) 00:36:22.012 14917.242 - 14979.657: 63.9518% ( 5) 00:36:22.012 14979.657 - 15042.072: 64.0010% ( 4) 00:36:22.012 15042.072 - 15104.488: 64.0256% ( 2) 00:36:22.012 15104.488 - 15166.903: 64.0502% ( 2) 00:36:22.012 15166.903 - 15229.318: 64.0748% ( 2) 00:36:22.012 15229.318 - 15291.733: 64.0994% ( 2) 00:36:22.012 15291.733 - 15354.149: 64.1240% ( 2) 00:36:22.012 15354.149 - 15416.564: 64.1363% ( 1) 00:36:22.012 15416.564 - 15478.979: 64.1732% ( 3) 00:36:22.012 15478.979 - 15541.394: 64.2347% ( 5) 00:36:22.012 15541.394 - 15603.810: 64.3947% ( 13) 00:36:22.012 15603.810 - 15666.225: 64.5423% ( 12) 00:36:22.012 15666.225 - 15728.640: 64.7269% ( 15) 00:36:22.012 15728.640 - 15791.055: 64.8253% ( 8) 00:36:22.012 15791.055 - 15853.470: 64.8991% ( 6) 00:36:22.012 15853.470 - 15915.886: 64.9606% ( 5) 00:36:22.012 15915.886 - 15978.301: 65.0098% ( 4) 00:36:22.012 15978.301 - 16103.131: 65.1083% ( 8) 00:36:22.012 16103.131 - 16227.962: 65.2067% ( 8) 00:36:22.012 16227.962 - 16352.792: 65.2436% ( 3) 00:36:22.012 16352.792 - 16477.623: 65.2805% ( 3) 00:36:22.012 16477.623 
- 16602.453: 65.3297% ( 4) 00:36:22.012 16602.453 - 16727.284: 65.3543% ( 2) 00:36:22.012 18350.080 - 18474.910: 65.4158% ( 5) 00:36:22.012 18474.910 - 18599.741: 65.5266% ( 9) 00:36:22.012 18599.741 - 18724.571: 65.5635% ( 3) 00:36:22.012 18724.571 - 18849.402: 65.6004% ( 3) 00:36:22.012 18849.402 - 18974.232: 65.6373% ( 3) 00:36:22.012 18974.232 - 19099.063: 65.6742% ( 3) 00:36:22.012 19099.063 - 19223.893: 65.7111% ( 3) 00:36:22.012 19223.893 - 19348.724: 65.7603% ( 4) 00:36:22.012 19348.724 - 19473.554: 65.7972% ( 3) 00:36:22.012 19473.554 - 19598.385: 65.8465% ( 4) 00:36:22.012 19598.385 - 19723.215: 66.0556% ( 17) 00:36:22.012 19723.215 - 19848.046: 66.3386% ( 23) 00:36:22.012 19848.046 - 19972.876: 66.8061% ( 38) 00:36:22.012 19972.876 - 20097.707: 67.3597% ( 45) 00:36:22.012 20097.707 - 20222.537: 67.9626% ( 49) 00:36:22.012 20222.537 - 20347.368: 68.8853% ( 75) 00:36:22.012 20347.368 - 20472.198: 69.8573% ( 79) 00:36:22.012 20472.198 - 20597.029: 70.9031% ( 85) 00:36:22.012 20597.029 - 20721.859: 72.1211% ( 99) 00:36:22.012 20721.859 - 20846.690: 73.3145% ( 97) 00:36:22.012 20846.690 - 20971.520: 74.6801% ( 111) 00:36:22.012 20971.520 - 21096.350: 76.3410% ( 135) 00:36:22.012 21096.350 - 21221.181: 77.7805% ( 117) 00:36:22.012 21221.181 - 21346.011: 79.4414% ( 135) 00:36:22.012 21346.011 - 21470.842: 80.9670% ( 124) 00:36:22.012 21470.842 - 21595.672: 82.6649% ( 138) 00:36:22.012 21595.672 - 21720.503: 84.3012% ( 133) 00:36:22.012 21720.503 - 21845.333: 86.1836% ( 153) 00:36:22.012 21845.333 - 21970.164: 88.3858% ( 179) 00:36:22.012 21970.164 - 22094.994: 89.6161% ( 100) 00:36:22.012 22094.994 - 22219.825: 91.0556% ( 117) 00:36:22.012 22219.825 - 22344.655: 92.1137% ( 86) 00:36:22.012 22344.655 - 22469.486: 93.0241% ( 74) 00:36:22.012 22469.486 - 22594.316: 94.2052% ( 96) 00:36:22.012 22594.316 - 22719.147: 94.9680% ( 62) 00:36:22.012 22719.147 - 22843.977: 95.6201% ( 53) 00:36:22.012 22843.977 - 22968.808: 96.3583% ( 60) 00:36:22.012 22968.808 - 23093.638: 96.7889% ( 35) 00:36:22.012 23093.638 - 23218.469: 97.3179% ( 43) 00:36:22.012 23218.469 - 23343.299: 97.7608% ( 36) 00:36:22.012 23343.299 - 23468.130: 97.9577% ( 16) 00:36:22.012 23468.130 - 23592.960: 98.0561% ( 8) 00:36:22.012 23592.960 - 23717.790: 98.1914% ( 11) 00:36:22.012 23717.790 - 23842.621: 98.2899% ( 8) 00:36:22.012 23842.621 - 23967.451: 98.3514% ( 5) 00:36:22.012 23967.451 - 24092.282: 98.4006% ( 4) 00:36:22.012 24092.282 - 24217.112: 98.4252% ( 2) 00:36:22.012 32955.246 - 33204.907: 98.4744% ( 4) 00:36:22.012 33204.907 - 33454.568: 98.5236% ( 4) 00:36:22.012 33454.568 - 33704.229: 98.5728% ( 4) 00:36:22.012 33704.229 - 33953.890: 98.6220% ( 4) 00:36:22.012 33953.890 - 34203.550: 98.6713% ( 4) 00:36:22.012 34203.550 - 34453.211: 98.7205% ( 4) 00:36:22.012 34453.211 - 34702.872: 98.7697% ( 4) 00:36:22.012 34702.872 - 34952.533: 98.8312% ( 5) 00:36:22.012 34952.533 - 35202.194: 98.8804% ( 4) 00:36:22.012 35202.194 - 35451.855: 98.9296% ( 4) 00:36:22.012 35451.855 - 35701.516: 98.9911% ( 5) 00:36:22.012 35701.516 - 35951.177: 99.0404% ( 4) 00:36:22.012 35951.177 - 36200.838: 99.0896% ( 4) 00:36:22.012 36200.838 - 36450.499: 99.1388% ( 4) 00:36:22.012 36450.499 - 36700.160: 99.2003% ( 5) 00:36:22.012 36700.160 - 36949.821: 99.2126% ( 1) 00:36:22.012 45937.615 - 46187.276: 99.2249% ( 1) 00:36:22.012 46187.276 - 46436.937: 99.2741% ( 4) 00:36:22.012 46436.937 - 46686.598: 99.3233% ( 4) 00:36:22.012 46686.598 - 46936.259: 99.3725% ( 4) 00:36:22.012 46936.259 - 47185.920: 99.4341% ( 5) 00:36:22.012 47185.920 - 
47435.581: 99.4833% ( 4) 00:36:22.012 47435.581 - 47685.242: 99.5325% ( 4) 00:36:22.012 47685.242 - 47934.903: 99.5817% ( 4) 00:36:22.012 47934.903 - 48184.564: 99.6432% ( 5) 00:36:22.012 48184.564 - 48434.225: 99.6924% ( 4) 00:36:22.012 48434.225 - 48683.886: 99.7539% ( 5) 00:36:22.012 48683.886 - 48933.547: 99.8031% ( 4) 00:36:22.012 48933.547 - 49183.208: 99.8524% ( 4) 00:36:22.012 49183.208 - 49432.869: 99.8893% ( 3) 00:36:22.012 49432.869 - 49682.530: 99.9508% ( 5) 00:36:22.012 49682.530 - 49932.190: 100.0000% ( 4) 00:36:22.012 00:36:22.012 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:22.012 ============================================================================== 00:36:22.012 Range in us Cumulative IO count 00:36:22.012 10111.269 - 10173.684: 0.0984% ( 8) 00:36:22.012 10173.684 - 10236.099: 0.2092% ( 9) 00:36:22.012 10236.099 - 10298.514: 0.3199% ( 9) 00:36:22.012 10298.514 - 10360.930: 0.4921% ( 14) 00:36:22.012 10360.930 - 10423.345: 0.7382% ( 20) 00:36:22.012 10423.345 - 10485.760: 1.0212% ( 23) 00:36:22.012 10485.760 - 10548.175: 1.4887% ( 38) 00:36:22.012 10548.175 - 10610.590: 1.9193% ( 35) 00:36:22.012 10610.590 - 10673.006: 2.4114% ( 40) 00:36:22.012 10673.006 - 10735.421: 2.9651% ( 45) 00:36:22.012 10735.421 - 10797.836: 3.6294% ( 54) 00:36:22.012 10797.836 - 10860.251: 4.2446% ( 50) 00:36:22.012 10860.251 - 10922.667: 4.8351% ( 48) 00:36:22.012 10922.667 - 10985.082: 5.4134% ( 47) 00:36:22.012 10985.082 - 11047.497: 6.2992% ( 72) 00:36:22.012 11047.497 - 11109.912: 7.2958% ( 81) 00:36:22.012 11109.912 - 11172.328: 8.5630% ( 103) 00:36:22.012 11172.328 - 11234.743: 9.8425% ( 104) 00:36:22.012 11234.743 - 11297.158: 11.4050% ( 127) 00:36:22.012 11297.158 - 11359.573: 13.2382% ( 149) 00:36:22.012 11359.573 - 11421.989: 14.8376% ( 130) 00:36:22.012 11421.989 - 11484.404: 16.3755% ( 125) 00:36:22.012 11484.404 - 11546.819: 18.0856% ( 139) 00:36:22.013 11546.819 - 11609.234: 19.8204% ( 141) 00:36:22.013 11609.234 - 11671.650: 21.6166% ( 146) 00:36:22.013 11671.650 - 11734.065: 23.6344% ( 164) 00:36:22.013 11734.065 - 11796.480: 25.9843% ( 191) 00:36:22.013 11796.480 - 11858.895: 28.3711% ( 194) 00:36:22.013 11858.895 - 11921.310: 30.6348% ( 184) 00:36:22.013 11921.310 - 11983.726: 32.4680% ( 149) 00:36:22.013 11983.726 - 12046.141: 34.2397% ( 144) 00:36:22.013 12046.141 - 12108.556: 35.9006% ( 135) 00:36:22.013 12108.556 - 12170.971: 37.3893% ( 121) 00:36:22.013 12170.971 - 12233.387: 39.0502% ( 135) 00:36:22.013 12233.387 - 12295.802: 40.6004% ( 126) 00:36:22.013 12295.802 - 12358.217: 42.5320% ( 157) 00:36:22.013 12358.217 - 12420.632: 44.3529% ( 148) 00:36:22.013 12420.632 - 12483.048: 46.3583% ( 163) 00:36:22.013 12483.048 - 12545.463: 47.8469% ( 121) 00:36:22.013 12545.463 - 12607.878: 49.3356% ( 121) 00:36:22.013 12607.878 - 12670.293: 50.5290% ( 97) 00:36:22.013 12670.293 - 12732.709: 51.6363% ( 90) 00:36:22.013 12732.709 - 12795.124: 52.4606% ( 67) 00:36:22.013 12795.124 - 12857.539: 53.3465% ( 72) 00:36:22.013 12857.539 - 12919.954: 54.4168% ( 87) 00:36:22.013 12919.954 - 12982.370: 55.5241% ( 90) 00:36:22.013 12982.370 - 13044.785: 56.3361% ( 66) 00:36:22.013 13044.785 - 13107.200: 57.0989% ( 62) 00:36:22.013 13107.200 - 13169.615: 57.9232% ( 67) 00:36:22.013 13169.615 - 13232.030: 58.8091% ( 72) 00:36:22.013 13232.030 - 13294.446: 59.4242% ( 50) 00:36:22.013 13294.446 - 13356.861: 59.8302% ( 33) 00:36:22.013 13356.861 - 13419.276: 60.1255% ( 24) 00:36:22.013 13419.276 - 13481.691: 60.4331% ( 25) 00:36:22.013 13481.691 - 13544.107: 60.6668% ( 
19) 00:36:22.013 13544.107 - 13606.522: 60.9252% ( 21) 00:36:22.013 13606.522 - 13668.937: 61.1467% ( 18) 00:36:22.013 13668.937 - 13731.352: 61.2820% ( 11) 00:36:22.013 13731.352 - 13793.768: 61.5773% ( 24) 00:36:22.013 13793.768 - 13856.183: 61.7495% ( 14) 00:36:22.013 13856.183 - 13918.598: 62.0940% ( 28) 00:36:22.013 13918.598 - 13981.013: 62.3401% ( 20) 00:36:22.013 13981.013 - 14043.429: 62.5492% ( 17) 00:36:22.013 14043.429 - 14105.844: 62.7830% ( 19) 00:36:22.013 14105.844 - 14168.259: 63.0044% ( 18) 00:36:22.013 14168.259 - 14230.674: 63.1644% ( 13) 00:36:22.013 14230.674 - 14293.090: 63.2874% ( 10) 00:36:22.013 14293.090 - 14355.505: 63.3735% ( 7) 00:36:22.013 14355.505 - 14417.920: 63.4473% ( 6) 00:36:22.013 14417.920 - 14480.335: 63.5212% ( 6) 00:36:22.013 14480.335 - 14542.750: 63.6073% ( 7) 00:36:22.013 14542.750 - 14605.166: 63.6934% ( 7) 00:36:22.013 14605.166 - 14667.581: 63.7549% ( 5) 00:36:22.013 14667.581 - 14729.996: 63.8903% ( 11) 00:36:22.013 14729.996 - 14792.411: 63.9518% ( 5) 00:36:22.013 14792.411 - 14854.827: 64.0010% ( 4) 00:36:22.013 14854.827 - 14917.242: 64.0379% ( 3) 00:36:22.013 14917.242 - 14979.657: 64.0748% ( 3) 00:36:22.013 14979.657 - 15042.072: 64.0994% ( 2) 00:36:22.013 15042.072 - 15104.488: 64.1855% ( 7) 00:36:22.013 15104.488 - 15166.903: 64.2717% ( 7) 00:36:22.013 15166.903 - 15229.318: 64.3209% ( 4) 00:36:22.013 15229.318 - 15291.733: 64.3701% ( 4) 00:36:22.013 15291.733 - 15354.149: 64.4193% ( 4) 00:36:22.013 15354.149 - 15416.564: 64.5300% ( 9) 00:36:22.013 15416.564 - 15478.979: 64.6161% ( 7) 00:36:22.013 15478.979 - 15541.394: 64.7269% ( 9) 00:36:22.013 15541.394 - 15603.810: 64.8253% ( 8) 00:36:22.013 15603.810 - 15666.225: 64.9606% ( 11) 00:36:22.013 15666.225 - 15728.640: 65.0344% ( 6) 00:36:22.013 15728.640 - 15791.055: 65.0837% ( 4) 00:36:22.013 15791.055 - 15853.470: 65.1452% ( 5) 00:36:22.013 15853.470 - 15915.886: 65.1698% ( 2) 00:36:22.013 15915.886 - 15978.301: 65.2067% ( 3) 00:36:22.013 15978.301 - 16103.131: 65.2682% ( 5) 00:36:22.013 16103.131 - 16227.962: 65.3297% ( 5) 00:36:22.013 16227.962 - 16352.792: 65.3543% ( 2) 00:36:22.013 17850.758 - 17975.589: 65.3666% ( 1) 00:36:22.013 17975.589 - 18100.419: 65.6373% ( 22) 00:36:22.013 18100.419 - 18225.250: 65.6742% ( 3) 00:36:22.013 18225.250 - 18350.080: 65.7111% ( 3) 00:36:22.013 18350.080 - 18474.910: 65.7357% ( 2) 00:36:22.013 18474.910 - 18599.741: 65.7726% ( 3) 00:36:22.013 18599.741 - 18724.571: 65.8095% ( 3) 00:36:22.013 18724.571 - 18849.402: 65.8588% ( 4) 00:36:22.013 18849.402 - 18974.232: 65.8834% ( 2) 00:36:22.013 18974.232 - 19099.063: 65.9203% ( 3) 00:36:22.013 19099.063 - 19223.893: 65.9572% ( 3) 00:36:22.013 19223.893 - 19348.724: 66.0064% ( 4) 00:36:22.013 19348.724 - 19473.554: 66.0556% ( 4) 00:36:22.013 19473.554 - 19598.385: 66.1294% ( 6) 00:36:22.013 19598.385 - 19723.215: 66.2032% ( 6) 00:36:22.013 19723.215 - 19848.046: 66.3509% ( 12) 00:36:22.013 19848.046 - 19972.876: 66.6093% ( 21) 00:36:22.013 19972.876 - 20097.707: 67.0276% ( 34) 00:36:22.013 20097.707 - 20222.537: 67.7165% ( 56) 00:36:22.013 20222.537 - 20347.368: 68.4055% ( 56) 00:36:22.013 20347.368 - 20472.198: 69.1068% ( 57) 00:36:22.013 20472.198 - 20597.029: 69.9926% ( 72) 00:36:22.013 20597.029 - 20721.859: 71.1368% ( 93) 00:36:22.013 20721.859 - 20846.690: 72.5640% ( 116) 00:36:22.013 20846.690 - 20971.520: 74.0650% ( 122) 00:36:22.013 20971.520 - 21096.350: 75.9350% ( 152) 00:36:22.013 21096.350 - 21221.181: 77.4483% ( 123) 00:36:22.013 21221.181 - 21346.011: 79.2077% ( 143) 00:36:22.013 
21346.011 - 21470.842: 81.4469% ( 182) 00:36:22.013 21470.842 - 21595.672: 83.5999% ( 175) 00:36:22.013 21595.672 - 21720.503: 85.6545% ( 167) 00:36:22.013 21720.503 - 21845.333: 87.6353% ( 161) 00:36:22.013 21845.333 - 21970.164: 89.2101% ( 128) 00:36:22.013 21970.164 - 22094.994: 90.5020% ( 105) 00:36:22.013 22094.994 - 22219.825: 91.6585% ( 94) 00:36:22.013 22219.825 - 22344.655: 92.7781% ( 91) 00:36:22.013 22344.655 - 22469.486: 93.5531% ( 63) 00:36:22.013 22469.486 - 22594.316: 94.4267% ( 71) 00:36:22.013 22594.316 - 22719.147: 95.0295% ( 49) 00:36:22.013 22719.147 - 22843.977: 95.5217% ( 40) 00:36:22.013 22843.977 - 22968.808: 95.9892% ( 38) 00:36:22.013 22968.808 - 23093.638: 96.6781% ( 56) 00:36:22.013 23093.638 - 23218.469: 97.3548% ( 55) 00:36:22.013 23218.469 - 23343.299: 97.6870% ( 27) 00:36:22.013 23343.299 - 23468.130: 97.8839% ( 16) 00:36:22.013 23468.130 - 23592.960: 98.0315% ( 12) 00:36:22.013 23592.960 - 23717.790: 98.1545% ( 10) 00:36:22.013 23717.790 - 23842.621: 98.2653% ( 9) 00:36:22.013 23842.621 - 23967.451: 98.3268% ( 5) 00:36:22.013 23967.451 - 24092.282: 98.3883% ( 5) 00:36:22.013 24092.282 - 24217.112: 98.4129% ( 2) 00:36:22.013 24217.112 - 24341.943: 98.4252% ( 1) 00:36:22.013 31706.941 - 31831.771: 98.5482% ( 10) 00:36:22.013 31831.771 - 31956.602: 98.6590% ( 9) 00:36:22.013 31956.602 - 32206.263: 98.8189% ( 13) 00:36:22.013 32206.263 - 32455.924: 98.9050% ( 7) 00:36:22.013 32455.924 - 32705.585: 98.9542% ( 4) 00:36:22.013 32705.585 - 32955.246: 98.9911% ( 3) 00:36:22.013 32955.246 - 33204.907: 99.0404% ( 4) 00:36:22.013 33204.907 - 33454.568: 99.0650% ( 2) 00:36:22.013 33454.568 - 33704.229: 99.1019% ( 3) 00:36:22.013 33704.229 - 33953.890: 99.1388% ( 3) 00:36:22.013 33953.890 - 34203.550: 99.1757% ( 3) 00:36:22.013 34203.550 - 34453.211: 99.2126% ( 3) 00:36:22.013 41443.718 - 41693.379: 99.2249% ( 1) 00:36:22.013 41693.379 - 41943.040: 99.2987% ( 6) 00:36:22.013 43690.667 - 43940.328: 99.3110% ( 1) 00:36:22.013 43940.328 - 44189.989: 99.3602% ( 4) 00:36:22.013 44189.989 - 44439.650: 99.3971% ( 3) 00:36:22.013 44439.650 - 44689.310: 99.4464% ( 4) 00:36:22.013 44689.310 - 44938.971: 99.4956% ( 4) 00:36:22.013 44938.971 - 45188.632: 99.5448% ( 4) 00:36:22.013 45188.632 - 45438.293: 99.5940% ( 4) 00:36:22.013 45438.293 - 45687.954: 99.6555% ( 5) 00:36:22.013 45687.954 - 45937.615: 99.7047% ( 4) 00:36:22.013 45937.615 - 46187.276: 99.7662% ( 5) 00:36:22.013 46187.276 - 46436.937: 99.8155% ( 4) 00:36:22.013 46436.937 - 46686.598: 99.8647% ( 4) 00:36:22.013 46686.598 - 46936.259: 99.9139% ( 4) 00:36:22.013 46936.259 - 47185.920: 99.9631% ( 4) 00:36:22.013 47185.920 - 47435.581: 100.0000% ( 3) 00:36:22.013 00:36:22.013 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:22.013 ============================================================================== 00:36:22.013 Range in us Cumulative IO count 00:36:22.013 9986.438 - 10048.853: 0.0123% ( 1) 00:36:22.013 10048.853 - 10111.269: 0.0246% ( 1) 00:36:22.013 10236.099 - 10298.514: 0.0492% ( 2) 00:36:22.013 10298.514 - 10360.930: 0.2092% ( 13) 00:36:22.013 10360.930 - 10423.345: 0.4183% ( 17) 00:36:22.013 10423.345 - 10485.760: 0.9104% ( 40) 00:36:22.013 10485.760 - 10548.175: 1.3164% ( 33) 00:36:22.013 10548.175 - 10610.590: 1.7594% ( 36) 00:36:22.013 10610.590 - 10673.006: 2.2884% ( 43) 00:36:22.013 10673.006 - 10735.421: 3.0143% ( 59) 00:36:22.013 10735.421 - 10797.836: 3.5556% ( 44) 00:36:22.013 10797.836 - 10860.251: 4.3430% ( 64) 00:36:22.013 10860.251 - 10922.667: 5.0197% ( 55) 00:36:22.013 
10922.667 - 10985.082: 6.0408% ( 83) 00:36:22.014 10985.082 - 11047.497: 7.1727% ( 92) 00:36:22.014 11047.497 - 11109.912: 8.2923% ( 91) 00:36:22.014 11109.912 - 11172.328: 9.3873% ( 89) 00:36:22.014 11172.328 - 11234.743: 10.5069% ( 91) 00:36:22.014 11234.743 - 11297.158: 11.6634% ( 94) 00:36:22.014 11297.158 - 11359.573: 12.9183% ( 102) 00:36:22.014 11359.573 - 11421.989: 14.2347% ( 107) 00:36:22.014 11421.989 - 11484.404: 16.1786% ( 158) 00:36:22.014 11484.404 - 11546.819: 18.3932% ( 180) 00:36:22.014 11546.819 - 11609.234: 20.6201% ( 181) 00:36:22.014 11609.234 - 11671.650: 22.9208% ( 187) 00:36:22.014 11671.650 - 11734.065: 25.1722% ( 183) 00:36:22.014 11734.065 - 11796.480: 27.4975% ( 189) 00:36:22.014 11796.480 - 11858.895: 29.3799% ( 153) 00:36:22.014 11858.895 - 11921.310: 30.8686% ( 121) 00:36:22.014 11921.310 - 11983.726: 32.4680% ( 130) 00:36:22.014 11983.726 - 12046.141: 33.9567% ( 121) 00:36:22.014 12046.141 - 12108.556: 35.5807% ( 132) 00:36:22.014 12108.556 - 12170.971: 37.4139% ( 149) 00:36:22.014 12170.971 - 12233.387: 39.1240% ( 139) 00:36:22.014 12233.387 - 12295.802: 40.9203% ( 146) 00:36:22.014 12295.802 - 12358.217: 42.8519% ( 157) 00:36:22.014 12358.217 - 12420.632: 44.4021% ( 126) 00:36:22.014 12420.632 - 12483.048: 46.1245% ( 140) 00:36:22.014 12483.048 - 12545.463: 47.6009% ( 120) 00:36:22.014 12545.463 - 12607.878: 48.8558% ( 102) 00:36:22.014 12607.878 - 12670.293: 50.1476% ( 105) 00:36:22.014 12670.293 - 12732.709: 51.0827% ( 76) 00:36:22.014 12732.709 - 12795.124: 51.8332% ( 61) 00:36:22.014 12795.124 - 12857.539: 52.6083% ( 63) 00:36:22.014 12857.539 - 12919.954: 53.5310% ( 75) 00:36:22.014 12919.954 - 12982.370: 54.4906% ( 78) 00:36:22.014 12982.370 - 13044.785: 55.5241% ( 84) 00:36:22.014 13044.785 - 13107.200: 56.4469% ( 75) 00:36:22.014 13107.200 - 13169.615: 57.3573% ( 74) 00:36:22.014 13169.615 - 13232.030: 57.9724% ( 50) 00:36:22.014 13232.030 - 13294.446: 58.3538% ( 31) 00:36:22.014 13294.446 - 13356.861: 58.9321% ( 47) 00:36:22.014 13356.861 - 13419.276: 59.3627% ( 35) 00:36:22.014 13419.276 - 13481.691: 59.8671% ( 41) 00:36:22.014 13481.691 - 13544.107: 60.1132% ( 20) 00:36:22.014 13544.107 - 13606.522: 60.4208% ( 25) 00:36:22.014 13606.522 - 13668.937: 60.8268% ( 33) 00:36:22.014 13668.937 - 13731.352: 61.2328% ( 33) 00:36:22.014 13731.352 - 13793.768: 61.7495% ( 42) 00:36:22.014 13793.768 - 13856.183: 62.2293% ( 39) 00:36:22.014 13856.183 - 13918.598: 62.5246% ( 24) 00:36:22.014 13918.598 - 13981.013: 62.7584% ( 19) 00:36:22.014 13981.013 - 14043.429: 63.1029% ( 28) 00:36:22.014 14043.429 - 14105.844: 63.2136% ( 9) 00:36:22.014 14105.844 - 14168.259: 63.2874% ( 6) 00:36:22.014 14168.259 - 14230.674: 63.3735% ( 7) 00:36:22.014 14230.674 - 14293.090: 63.4473% ( 6) 00:36:22.014 14293.090 - 14355.505: 63.5089% ( 5) 00:36:22.014 14355.505 - 14417.920: 63.5704% ( 5) 00:36:22.014 14417.920 - 14480.335: 63.5950% ( 2) 00:36:22.014 14480.335 - 14542.750: 63.6319% ( 3) 00:36:22.014 14542.750 - 14605.166: 63.6811% ( 4) 00:36:22.014 14605.166 - 14667.581: 63.7180% ( 3) 00:36:22.014 14667.581 - 14729.996: 63.8041% ( 7) 00:36:22.014 14729.996 - 14792.411: 63.8533% ( 4) 00:36:22.014 14792.411 - 14854.827: 63.9026% ( 4) 00:36:22.014 14854.827 - 14917.242: 63.9641% ( 5) 00:36:22.014 14917.242 - 14979.657: 64.0256% ( 5) 00:36:22.014 14979.657 - 15042.072: 64.0871% ( 5) 00:36:22.014 15042.072 - 15104.488: 64.1363% ( 4) 00:36:22.014 15104.488 - 15166.903: 64.2470% ( 9) 00:36:22.014 15166.903 - 15229.318: 64.3455% ( 8) 00:36:22.014 15229.318 - 15291.733: 64.4931% ( 
12) 00:36:22.014 15291.733 - 15354.149: 64.6531% ( 13) 00:36:22.014 15354.149 - 15416.564: 64.8376% ( 15) 00:36:22.014 15416.564 - 15478.979: 64.9729% ( 11) 00:36:22.014 15478.979 - 15541.394: 64.9975% ( 2) 00:36:22.014 15541.394 - 15603.810: 65.0344% ( 3) 00:36:22.014 15603.810 - 15666.225: 65.0837% ( 4) 00:36:22.014 15666.225 - 15728.640: 65.1329% ( 4) 00:36:22.014 15728.640 - 15791.055: 65.1698% ( 3) 00:36:22.014 15791.055 - 15853.470: 65.2067% ( 3) 00:36:22.014 15853.470 - 15915.886: 65.2313% ( 2) 00:36:22.014 15915.886 - 15978.301: 65.2559% ( 2) 00:36:22.014 15978.301 - 16103.131: 65.3543% ( 8) 00:36:22.014 17476.267 - 17601.097: 65.6496% ( 24) 00:36:22.014 17601.097 - 17725.928: 65.7234% ( 6) 00:36:22.014 17725.928 - 17850.758: 65.7603% ( 3) 00:36:22.014 17850.758 - 17975.589: 65.7972% ( 3) 00:36:22.014 17975.589 - 18100.419: 65.8342% ( 3) 00:36:22.014 18100.419 - 18225.250: 65.8834% ( 4) 00:36:22.014 18225.250 - 18350.080: 65.9203% ( 3) 00:36:22.014 18350.080 - 18474.910: 65.9572% ( 3) 00:36:22.014 18474.910 - 18599.741: 66.0064% ( 4) 00:36:22.014 18599.741 - 18724.571: 66.0433% ( 3) 00:36:22.014 18724.571 - 18849.402: 66.0802% ( 3) 00:36:22.014 18849.402 - 18974.232: 66.1171% ( 3) 00:36:22.014 18974.232 - 19099.063: 66.1417% ( 2) 00:36:22.014 19598.385 - 19723.215: 66.2032% ( 5) 00:36:22.014 19723.215 - 19848.046: 66.3263% ( 10) 00:36:22.014 19848.046 - 19972.876: 66.4985% ( 14) 00:36:22.014 19972.876 - 20097.707: 67.0030% ( 41) 00:36:22.014 20097.707 - 20222.537: 67.7165% ( 58) 00:36:22.014 20222.537 - 20347.368: 68.4424% ( 59) 00:36:22.014 20347.368 - 20472.198: 69.2052% ( 62) 00:36:22.014 20472.198 - 20597.029: 70.3494% ( 93) 00:36:22.014 20597.029 - 20721.859: 71.6412% ( 105) 00:36:22.014 20721.859 - 20846.690: 73.0930% ( 118) 00:36:22.014 20846.690 - 20971.520: 74.5202% ( 116) 00:36:22.014 20971.520 - 21096.350: 75.9473% ( 116) 00:36:22.014 21096.350 - 21221.181: 77.7805% ( 149) 00:36:22.014 21221.181 - 21346.011: 79.5891% ( 147) 00:36:22.014 21346.011 - 21470.842: 81.5330% ( 158) 00:36:22.014 21470.842 - 21595.672: 83.6983% ( 176) 00:36:22.014 21595.672 - 21720.503: 85.5315% ( 149) 00:36:22.014 21720.503 - 21845.333: 87.5492% ( 164) 00:36:22.014 21845.333 - 21970.164: 88.9641% ( 115) 00:36:22.014 21970.164 - 22094.994: 90.0344% ( 87) 00:36:22.014 22094.994 - 22219.825: 91.3755% ( 109) 00:36:22.014 22219.825 - 22344.655: 92.3967% ( 83) 00:36:22.014 22344.655 - 22469.486: 93.1594% ( 62) 00:36:22.014 22469.486 - 22594.316: 93.9961% ( 68) 00:36:22.014 22594.316 - 22719.147: 94.7096% ( 58) 00:36:22.014 22719.147 - 22843.977: 95.3248% ( 50) 00:36:22.014 22843.977 - 22968.808: 96.0015% ( 55) 00:36:22.014 22968.808 - 23093.638: 96.6905% ( 56) 00:36:22.014 23093.638 - 23218.469: 97.2933% ( 49) 00:36:22.014 23218.469 - 23343.299: 97.7854% ( 40) 00:36:22.014 23343.299 - 23468.130: 97.9454% ( 13) 00:36:22.014 23468.130 - 23592.960: 98.0684% ( 10) 00:36:22.014 23592.960 - 23717.790: 98.2037% ( 11) 00:36:22.014 23717.790 - 23842.621: 98.3391% ( 11) 00:36:22.014 23842.621 - 23967.451: 98.3883% ( 4) 00:36:22.014 23967.451 - 24092.282: 98.4252% ( 3) 00:36:22.014 27837.196 - 27962.027: 98.4375% ( 1) 00:36:22.014 28461.349 - 28586.179: 98.4744% ( 3) 00:36:22.014 28586.179 - 28711.010: 98.6836% ( 17) 00:36:22.014 28711.010 - 28835.840: 98.6959% ( 1) 00:36:22.014 28960.670 - 29085.501: 98.7082% ( 1) 00:36:22.014 29085.501 - 29210.331: 98.7205% ( 1) 00:36:22.014 29210.331 - 29335.162: 98.7451% ( 2) 00:36:22.014 29335.162 - 29459.992: 98.7574% ( 1) 00:36:22.014 29459.992 - 29584.823: 98.7697% ( 1) 
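Each latency histogram above pairs a bucket range in microseconds with the cumulative percentage of I/Os completed and that bucket's IO count. A minimal shell sketch for pulling an approximate p99 out of such a dump — build.log is a stand-in name for a saved copy of this console output, and the regex assumes the "low - high: pct% ( n)" row format used here:

    # extract "low - high: pct%" rows, then report the first bucket whose
    # cumulative percentage reaches 99%; its upper bound caps the p99 latency
    grep -oE '[0-9.]+ - [0-9.]+: +[0-9.]+%' build.log \
      | awk -F'[ :%-]+' '$3 >= 99 { print "p99 <= " $2 " us"; exit }'

Because the log holds one histogram per controller/namespace, this reports the first histogram it meets; narrow the input first if a specific controller is wanted.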
00:36:22.014
00:36:22.014 09:05:24 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:36:22.014
00:36:22.014 real 0m2.866s
00:36:22.014 user 0m2.350s
00:36:22.014 sys 0m0.368s
00:36:22.014 09:05:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:22.014 09:05:24 -- common/autotest_common.sh@10 -- # set +x
00:36:22.014 ************************************
00:36:22.014 END TEST nvme_perf
00:36:22.014 ************************************
00:36:22.014 09:05:24 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:36:22.014 09:05:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:36:22.014 09:05:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:22.014 09:05:24 -- common/autotest_common.sh@10 -- # set +x
00:36:22.309 ************************************
00:36:22.309 START TEST nvme_hello_world
00:36:22.309 ************************************
00:36:22.309 09:05:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:36:22.567 Initializing NVMe Controllers
00:36:22.567 Attached to 0000:00:10.0
00:36:22.567 Namespace ID: 1 size: 6GB
00:36:22.567 Attached to 0000:00:11.0
00:36:22.567 Namespace ID: 1 size: 5GB
00:36:22.567 Attached to 0000:00:13.0
00:36:22.567 Namespace ID: 1 size: 1GB
00:36:22.567 Attached to 0000:00:12.0
00:36:22.567 Namespace ID: 1 size: 4GB
00:36:22.567 Namespace ID: 2 size: 4GB
00:36:22.567 Namespace ID: 3 size: 4GB
00:36:22.567 Initialization complete.
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
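The START/END banners and real/user/sys timings that bracket every test in this log (the hello_world output above continues just below) are printed by SPDK's run_test helper in test/common/autotest_common.sh. A rough sketch of its shape, reconstructed from the banners themselves rather than from the helper's actual source, so treat the body as an assumption:

    # sketch of the wrapper behind the banners in this log; the real helper
    # also manages xtrace and error propagation, and varies between releases
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # the real/user/sys triples in this log come from time
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The nvme.sh driver calls it as run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0, which is exactly the xtrace line logged above.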
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
00:36:22.567 INFO: using host memory buffer for IO
00:36:22.567 Hello world!
00:36:22.567
00:36:22.567 real 0m0.391s
00:36:22.567 user 0m0.150s
00:36:22.567 sys 0m0.193s
00:36:22.567 09:05:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:22.567 09:05:24 -- common/autotest_common.sh@10 -- # set +x
00:36:22.567 ************************************
00:36:22.567 END TEST nvme_hello_world
00:36:22.567 ************************************
00:36:22.567 09:05:24 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:36:22.567 09:05:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:36:22.567 09:05:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:22.567 09:05:24 -- common/autotest_common.sh@10 -- # set +x
00:36:22.826 ************************************
00:36:22.826 START TEST nvme_sgl
00:36:22.826 ************************************
00:36:22.827 09:05:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:36:23.086 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:36:23.086 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:36:23.086 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:36:23.086 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:36:23.086 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:36:23.086 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:36:23.086 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:36:23.086 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:36:23.086 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:36:23.086 NVMe Readv/Writev Request test
00:36:23.086 Attached to 0000:00:10.0
00:36:23.086 Attached to 0000:00:11.0
00:36:23.086 Attached to 0000:00:13.0
00:36:23.086 Attached to 0000:00:12.0
00:36:23.086 0000:00:10.0: build_io_request_2 test passed
00:36:23.086 0000:00:10.0: build_io_request_4 test passed
00:36:23.086 0000:00:10.0: build_io_request_5 test passed
00:36:23.086 0000:00:10.0: build_io_request_6 test passed
00:36:23.086 0000:00:10.0: build_io_request_7 test passed
00:36:23.086 0000:00:10.0: build_io_request_10 test passed
00:36:23.086 0000:00:11.0: build_io_request_2 test passed
00:36:23.086 0000:00:11.0: build_io_request_4 test passed
00:36:23.086 0000:00:11.0: build_io_request_5 test passed
00:36:23.086 0000:00:11.0: build_io_request_6 test passed
00:36:23.086 0000:00:11.0: build_io_request_7 test passed
00:36:23.086 0000:00:11.0: build_io_request_10 test passed
00:36:23.086 Cleaning up...
00:36:23.086 ************************************
00:36:23.086 END TEST nvme_sgl
00:36:23.086 ************************************
00:36:23.086
00:36:23.086 real 0m0.478s
00:36:23.086 user 0m0.248s
00:36:23.086 sys 0m0.180s
00:36:23.086 09:05:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:23.086 09:05:25 -- common/autotest_common.sh@10 -- # set +x
00:36:23.344 09:05:25 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:36:23.344 09:05:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:36:23.344 09:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:23.344 09:05:25 -- common/autotest_common.sh@10 -- # set +x
00:36:23.344 ************************************
00:36:23.344 START TEST nvme_e2edp
00:36:23.344 ************************************
00:36:23.344 09:05:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:36:23.602 NVMe Write/Read with End-to-End data protection test
00:36:23.602 Attached to 0000:00:10.0
00:36:23.602 Attached to 0000:00:11.0
00:36:23.602 Attached to 0000:00:13.0
00:36:23.602 Attached to 0000:00:12.0
00:36:23.602 Cleaning up...
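With every test in this stretch of the log bracketed the same way, it can help to skim only the banners and timings rather than the full output. A one-liner sketch (build.log again stands in for a saved copy of this console output):

    # list just the START/END banners and the time(1) summaries, with line numbers
    grep -nE 'START TEST|END TEST|real[[:space:]]+0m' build.log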
00:36:23.602
00:36:23.602 real 0m0.354s
00:36:23.602 user 0m0.114s
00:36:23.602 sys 0m0.183s
00:36:23.602 09:05:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:23.602 09:05:25 -- common/autotest_common.sh@10 -- # set +x
00:36:23.602 ************************************
00:36:23.602 END TEST nvme_e2edp
00:36:23.602 ************************************
00:36:23.602 09:05:25 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:36:23.602 09:05:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:36:23.602 09:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:23.602 09:05:25 -- common/autotest_common.sh@10 -- # set +x
00:36:23.860 ************************************
00:36:23.860 START TEST nvme_reserve
00:36:23.860 ************************************
00:36:23.860 09:05:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:36:24.119 =====================================================
00:36:24.119 NVMe Controller at PCI bus 0, device 16, function 0
00:36:24.119 =====================================================
00:36:24.119 Reservations: Not Supported
00:36:24.119 =====================================================
00:36:24.119 NVMe Controller at PCI bus 0, device 17, function 0
00:36:24.119 =====================================================
00:36:24.119 Reservations: Not Supported
00:36:24.119 =====================================================
00:36:24.119 NVMe Controller at PCI bus 0, device 19, function 0
00:36:24.119 =====================================================
00:36:24.119 Reservations: Not Supported
00:36:24.119 =====================================================
00:36:24.119 NVMe Controller at PCI bus 0, device 18, function 0
00:36:24.119 =====================================================
00:36:24.119 Reservations: Not Supported
00:36:24.119 Reservation test passed
00:36:24.119
00:36:24.119 real 0m0.336s
00:36:24.119 user 0m0.104s
00:36:24.119 sys 0m0.172s
00:36:24.119 09:05:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:24.119 09:05:26 -- common/autotest_common.sh@10 -- # set +x
00:36:24.119 ************************************
00:36:24.119 END TEST nvme_reserve
00:36:24.119 ************************************
00:36:24.119 09:05:26 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:36:24.119 09:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:36:24.119 09:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:24.119 09:05:26 -- common/autotest_common.sh@10 -- # set +x
00:36:24.376 ************************************
00:36:24.376 START TEST nvme_err_injection
00:36:24.376 ************************************
00:36:24.376 09:05:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:36:24.634 NVMe Error Injection test
00:36:24.634 Attached to 0000:00:10.0
00:36:24.634 Attached to 0000:00:11.0
00:36:24.634 Attached to 0000:00:13.0
00:36:24.634 Attached to 0000:00:12.0
00:36:24.634 0000:00:12.0: get features failed as expected
00:36:24.634 0000:00:10.0: get features failed as expected
00:36:24.634 0000:00:11.0: get features failed as expected
00:36:24.634 0000:00:13.0: get features failed as expected
00:36:24.634 0000:00:10.0: get features successfully as expected
00:36:24.634 0000:00:11.0: get features successfully as expected
00:36:24.634 0000:00:13.0: get features successfully as expected
00:36:24.634 0000:00:12.0: get features successfully as expected
00:36:24.634 0000:00:10.0: read failed as expected
00:36:24.634 0000:00:11.0: read failed as expected
00:36:24.634 0000:00:13.0: read failed as expected
00:36:24.634 0000:00:12.0: read failed as expected
00:36:24.634 0000:00:10.0: read successfully as expected
00:36:24.634 0000:00:11.0: read successfully as expected
00:36:24.634 0000:00:13.0: read successfully as expected
00:36:24.634 0000:00:12.0: read successfully as expected
00:36:24.634 Cleaning up...
00:36:24.634
00:36:24.634 real 0m0.336s
00:36:24.634 user 0m0.134s
00:36:24.634 sys 0m0.149s
00:36:24.634 09:05:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:24.634 09:05:26 -- common/autotest_common.sh@10 -- # set +x
00:36:24.634 ************************************
00:36:24.634 END TEST nvme_err_injection
00:36:24.634 ************************************
00:36:24.634 09:05:26 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:36:24.634 09:05:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:36:24.634 09:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:24.634 09:05:26 -- common/autotest_common.sh@10 -- # set +x
00:36:24.634 ************************************
00:36:24.634 START TEST nvme_overhead
00:36:24.634 ************************************
00:36:24.634 09:05:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:36:26.049 Initializing NVMe Controllers
00:36:26.049 Attached to 0000:00:10.0
00:36:26.049 Attached to 0000:00:11.0
00:36:26.049 Attached to 0000:00:13.0
00:36:26.049 Attached to 0000:00:12.0
00:36:26.049 Initialization complete. Launching workers.
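The nvme_overhead invocation above passes -o 4096, -t 1, -H and -i 0. Reading the flags off this command line (their semantics here are inferred from the values used, not taken from the tool's documented usage, so treat them as assumptions): -o looks like the per-I/O size in bytes, -t the run time in seconds, -H a request for the latency histograms printed below, and -i the shared-memory id used by the other test binaries in this log. To repeat the measurement by hand from a built SPDK tree, something like:

    # rerun the overhead measurement outside the CI harness (sketch; assumes
    # devices are already bound for SPDK and the tree is built)
    sudo test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0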
00:36:26.049 submit (in ns) avg, min, max = 18939.8, 13038.1, 764752.4
00:36:26.049 complete (in ns) avg, min, max = 12898.1, 9191.4, 888203.8
00:36:26.049
00:36:26.049 Submit histogram
00:36:26.049 ================
00:36:26.049        Range in us     Cumulative    Count
[submit-latency bucket rows omitted: the cumulative count climbs from 0.0092% in the 12.983 - 13.044 us bucket to 99.4822% by the 52.663 - 52.907 us bucket; the captured log breaks off mid-row at the 53.150 - 53.394 us bucket]
99.4914% ( 1) 00:36:26.050 53.638 - 53.882: 99.5099% ( 2) 00:36:26.050 53.882 - 54.126: 99.5192% ( 1) 00:36:26.050 54.370 - 54.613: 99.5377% ( 2) 00:36:26.050 55.101 - 55.345: 99.5469% ( 1) 00:36:26.050 55.345 - 55.589: 99.5562% ( 1) 00:36:26.050 55.589 - 55.832: 99.5654% ( 1) 00:36:26.050 56.808 - 57.051: 99.5747% ( 1) 00:36:26.050 57.051 - 57.295: 99.5839% ( 1) 00:36:26.050 58.758 - 59.002: 99.5932% ( 1) 00:36:26.050 59.246 - 59.490: 99.6024% ( 1) 00:36:26.050 59.490 - 59.733: 99.6117% ( 1) 00:36:26.050 59.733 - 59.977: 99.6209% ( 1) 00:36:26.050 60.465 - 60.709: 99.6394% ( 2) 00:36:26.050 60.952 - 61.196: 99.6486% ( 1) 00:36:26.050 61.196 - 61.440: 99.6579% ( 1) 00:36:26.050 61.684 - 61.928: 99.6671% ( 1) 00:36:26.050 62.171 - 62.415: 99.6764% ( 1) 00:36:26.050 62.415 - 62.903: 99.6949% ( 2) 00:36:26.050 62.903 - 63.390: 99.7134% ( 2) 00:36:26.050 64.366 - 64.853: 99.7226% ( 1) 00:36:26.050 65.341 - 65.829: 99.7319% ( 1) 00:36:26.050 65.829 - 66.316: 99.7596% ( 3) 00:36:26.050 66.804 - 67.291: 99.7688% ( 1) 00:36:26.050 68.754 - 69.242: 99.7781% ( 1) 00:36:26.050 69.242 - 69.730: 99.7873% ( 1) 00:36:26.050 70.217 - 70.705: 99.8058% ( 2) 00:36:26.050 71.680 - 72.168: 99.8151% ( 1) 00:36:26.050 72.168 - 72.655: 99.8243% ( 1) 00:36:26.050 72.655 - 73.143: 99.8336% ( 1) 00:36:26.050 77.044 - 77.531: 99.8428% ( 1) 00:36:26.050 79.482 - 79.970: 99.8521% ( 1) 00:36:26.050 80.457 - 80.945: 99.8613% ( 1) 00:36:26.050 80.945 - 81.432: 99.8706% ( 1) 00:36:26.050 81.920 - 82.408: 99.8798% ( 1) 00:36:26.050 85.821 - 86.309: 99.8890% ( 1) 00:36:26.050 88.259 - 88.747: 99.8983% ( 1) 00:36:26.050 92.160 - 92.648: 99.9075% ( 1) 00:36:26.050 92.648 - 93.135: 99.9168% ( 1) 00:36:26.050 94.598 - 95.086: 99.9260% ( 1) 00:36:26.050 96.061 - 96.549: 99.9353% ( 1) 00:36:26.050 124.343 - 124.830: 99.9445% ( 1) 00:36:26.050 142.385 - 143.360: 99.9538% ( 1) 00:36:26.050 144.335 - 145.310: 99.9630% ( 1) 00:36:26.050 171.642 - 172.617: 99.9723% ( 1) 00:36:26.050 218.453 - 219.429: 99.9815% ( 1) 00:36:26.050 223.330 - 224.305: 99.9908% ( 1) 00:36:26.050 764.587 - 768.488: 100.0000% ( 1) 00:36:26.050 00:36:26.050 Complete histogram 00:36:26.050 ================== 00:36:26.050 Range in us Cumulative Count 00:36:26.050 9.143 - 9.204: 0.0092% ( 1) 00:36:26.050 9.204 - 9.265: 0.2219% ( 23) 00:36:26.050 9.265 - 9.326: 0.6195% ( 43) 00:36:26.050 9.326 - 9.387: 0.8322% ( 23) 00:36:26.050 9.387 - 9.448: 0.9524% ( 13) 00:36:26.050 9.448 - 9.509: 1.1558% ( 22) 00:36:26.050 9.509 - 9.570: 1.9048% ( 81) 00:36:26.050 9.570 - 9.630: 3.2640% ( 147) 00:36:26.050 9.630 - 9.691: 4.2903% ( 111) 00:36:26.050 9.691 - 9.752: 5.0763% ( 85) 00:36:26.050 9.752 - 9.813: 5.5663% ( 53) 00:36:26.050 9.813 - 9.874: 6.0009% ( 47) 00:36:26.050 9.874 - 9.935: 6.3338% ( 36) 00:36:26.050 9.935 - 9.996: 6.7406% ( 44) 00:36:26.050 9.996 - 10.057: 6.9995% ( 28) 00:36:26.050 10.057 - 10.118: 7.2122% ( 23) 00:36:26.050 10.118 - 10.179: 7.3324% ( 13) 00:36:26.050 10.179 - 10.240: 7.4711% ( 15) 00:36:26.050 10.240 - 10.301: 7.5358% ( 7) 00:36:26.050 10.301 - 10.362: 7.6283% ( 10) 00:36:26.050 10.362 - 10.423: 7.6375% ( 1) 00:36:26.050 10.423 - 10.484: 7.6653% ( 3) 00:36:26.050 10.484 - 10.545: 7.7023% ( 4) 00:36:26.050 10.545 - 10.606: 7.7393% ( 4) 00:36:26.050 10.606 - 10.667: 7.7577% ( 2) 00:36:26.050 10.667 - 10.728: 7.8132% ( 6) 00:36:26.050 10.728 - 10.789: 7.8410% ( 3) 00:36:26.050 10.789 - 10.850: 7.8502% ( 1) 00:36:26.050 10.850 - 10.910: 7.8872% ( 4) 00:36:26.050 10.910 - 10.971: 8.1553% ( 29) 00:36:26.050 10.971 - 11.032: 11.4101% ( 352) 
00:36:26.050 11.032 - 11.093: 19.8890% ( 917) 00:36:26.050 11.093 - 11.154: 31.1327% ( 1216) 00:36:26.050 11.154 - 11.215: 39.3435% ( 888) 00:36:26.050 11.215 - 11.276: 44.3366% ( 540) 00:36:26.050 11.276 - 11.337: 47.1660% ( 306) 00:36:26.050 11.337 - 11.398: 48.9968% ( 198) 00:36:26.050 11.398 - 11.459: 50.7351% ( 188) 00:36:26.050 11.459 - 11.520: 51.8262% ( 118) 00:36:26.050 11.520 - 11.581: 52.6583% ( 90) 00:36:26.050 11.581 - 11.642: 53.1299% ( 51) 00:36:26.050 11.642 - 11.703: 53.4628% ( 36) 00:36:26.050 11.703 - 11.764: 53.6847% ( 24) 00:36:26.050 11.764 - 11.825: 53.9251% ( 26) 00:36:26.050 11.825 - 11.886: 54.1100% ( 20) 00:36:26.050 11.886 - 11.947: 54.2487% ( 15) 00:36:26.050 11.947 - 12.008: 54.4429% ( 21) 00:36:26.050 12.008 - 12.069: 54.5816% ( 15) 00:36:26.050 12.069 - 12.130: 54.8867% ( 33) 00:36:26.050 12.130 - 12.190: 55.1826% ( 32) 00:36:26.050 12.190 - 12.251: 55.6542% ( 51) 00:36:26.050 12.251 - 12.312: 55.9871% ( 36) 00:36:26.050 12.312 - 12.373: 56.2737% ( 31) 00:36:26.050 12.373 - 12.434: 56.5049% ( 25) 00:36:26.050 12.434 - 12.495: 57.2076% ( 76) 00:36:26.050 12.495 - 12.556: 60.5548% ( 362) 00:36:26.050 12.556 - 12.617: 66.6019% ( 654) 00:36:26.050 12.617 - 12.678: 73.3148% ( 726) 00:36:26.050 12.678 - 12.739: 78.2709% ( 536) 00:36:26.050 12.739 - 12.800: 81.6366% ( 364) 00:36:26.050 12.800 - 12.861: 83.7725% ( 231) 00:36:26.050 12.861 - 12.922: 85.1965% ( 154) 00:36:26.050 12.922 - 12.983: 86.4355% ( 134) 00:36:26.050 12.983 - 13.044: 87.3694% ( 101) 00:36:26.050 13.044 - 13.105: 88.2293% ( 93) 00:36:26.050 13.105 - 13.166: 88.9320% ( 76) 00:36:26.050 13.166 - 13.227: 89.4868% ( 60) 00:36:26.050 13.227 - 13.288: 89.8197% ( 36) 00:36:26.050 13.288 - 13.349: 90.1711% ( 38) 00:36:26.050 13.349 - 13.410: 90.5039% ( 36) 00:36:26.050 13.410 - 13.470: 90.8276% ( 35) 00:36:26.050 13.470 - 13.531: 90.9755% ( 16) 00:36:26.050 13.531 - 13.592: 91.1419% ( 18) 00:36:26.050 13.592 - 13.653: 91.2899% ( 16) 00:36:26.050 13.653 - 13.714: 91.4471% ( 17) 00:36:26.050 13.714 - 13.775: 91.6412% ( 21) 00:36:26.050 13.775 - 13.836: 91.7892% ( 16) 00:36:26.050 13.836 - 13.897: 92.0573% ( 29) 00:36:26.050 13.897 - 13.958: 92.2053% ( 16) 00:36:26.050 13.958 - 14.019: 92.4919% ( 31) 00:36:26.050 14.019 - 14.080: 92.6861% ( 21) 00:36:26.050 14.080 - 14.141: 92.9080% ( 24) 00:36:26.050 14.141 - 14.202: 93.0837% ( 19) 00:36:26.050 14.202 - 14.263: 93.1946% ( 12) 00:36:26.050 14.263 - 14.324: 93.2779% ( 9) 00:36:26.050 14.324 - 14.385: 93.3888% ( 12) 00:36:26.050 14.385 - 14.446: 93.4443% ( 6) 00:36:26.050 14.446 - 14.507: 93.4628% ( 2) 00:36:26.050 14.507 - 14.568: 93.5090% ( 5) 00:36:26.051 14.568 - 14.629: 93.5552% ( 5) 00:36:26.051 14.629 - 14.690: 93.6015% ( 5) 00:36:26.051 14.690 - 14.750: 93.6107% ( 1) 00:36:26.051 14.750 - 14.811: 93.6292% ( 2) 00:36:26.051 14.811 - 14.872: 93.6570% ( 3) 00:36:26.051 14.872 - 14.933: 93.6662% ( 1) 00:36:26.051 14.933 - 14.994: 93.6755% ( 1) 00:36:26.051 14.994 - 15.055: 93.6847% ( 1) 00:36:26.051 15.055 - 15.116: 93.7032% ( 2) 00:36:26.051 15.177 - 15.238: 93.7402% ( 4) 00:36:26.051 15.238 - 15.299: 93.7587% ( 2) 00:36:26.051 15.299 - 15.360: 93.7679% ( 1) 00:36:26.051 15.360 - 15.421: 93.7772% ( 1) 00:36:26.051 15.482 - 15.543: 93.7957% ( 2) 00:36:26.051 15.543 - 15.604: 93.8049% ( 1) 00:36:26.051 15.604 - 15.726: 93.8511% ( 5) 00:36:26.051 15.726 - 15.848: 93.8696% ( 2) 00:36:26.051 15.848 - 15.970: 93.9066% ( 4) 00:36:26.051 15.970 - 16.091: 93.9344% ( 3) 00:36:26.051 16.091 - 16.213: 93.9621% ( 3) 00:36:26.051 16.213 - 16.335: 93.9898% ( 3) 
00:36:26.051 16.335 - 16.457: 93.9991% ( 1) 00:36:26.051 16.457 - 16.579: 94.0083% ( 1) 00:36:26.051 16.579 - 16.701: 94.0361% ( 3) 00:36:26.051 16.701 - 16.823: 94.0453% ( 1) 00:36:26.051 16.823 - 16.945: 94.0638% ( 2) 00:36:26.051 16.945 - 17.067: 94.0730% ( 1) 00:36:26.051 17.067 - 17.189: 94.0915% ( 2) 00:36:26.051 17.189 - 17.310: 94.1008% ( 1) 00:36:26.051 17.310 - 17.432: 94.1285% ( 3) 00:36:26.051 17.554 - 17.676: 94.1378% ( 1) 00:36:26.051 17.676 - 17.798: 94.1748% ( 4) 00:36:26.051 17.798 - 17.920: 94.2117% ( 4) 00:36:26.051 17.920 - 18.042: 94.2210% ( 1) 00:36:26.051 18.042 - 18.164: 94.2395% ( 2) 00:36:26.051 18.164 - 18.286: 94.3042% ( 7) 00:36:26.051 18.286 - 18.408: 94.3504% ( 5) 00:36:26.051 18.408 - 18.530: 94.3782% ( 3) 00:36:26.051 18.530 - 18.651: 94.4614% ( 9) 00:36:26.051 18.651 - 18.773: 94.4799% ( 2) 00:36:26.051 18.773 - 18.895: 94.5169% ( 4) 00:36:26.051 18.895 - 19.017: 94.5446% ( 3) 00:36:26.051 19.017 - 19.139: 94.5908% ( 5) 00:36:26.051 19.139 - 19.261: 94.6833% ( 10) 00:36:26.051 19.261 - 19.383: 94.7480% ( 7) 00:36:26.051 19.383 - 19.505: 94.8035% ( 6) 00:36:26.051 19.505 - 19.627: 94.8682% ( 7) 00:36:26.051 19.627 - 19.749: 94.9145% ( 5) 00:36:26.051 19.749 - 19.870: 94.9977% ( 9) 00:36:26.051 19.870 - 19.992: 95.1086% ( 12) 00:36:26.051 19.992 - 20.114: 95.1456% ( 4) 00:36:26.051 20.114 - 20.236: 95.2381% ( 10) 00:36:26.051 20.236 - 20.358: 95.2751% ( 4) 00:36:26.051 20.358 - 20.480: 95.3398% ( 7) 00:36:26.051 20.480 - 20.602: 95.3953% ( 6) 00:36:26.051 20.602 - 20.724: 95.4693% ( 8) 00:36:26.051 20.724 - 20.846: 95.4970% ( 3) 00:36:26.051 20.846 - 20.968: 95.5247% ( 3) 00:36:26.051 20.968 - 21.090: 95.5617% ( 4) 00:36:26.051 21.090 - 21.211: 95.6080% ( 5) 00:36:26.051 21.211 - 21.333: 95.6264% ( 2) 00:36:26.051 21.333 - 21.455: 95.6912% ( 7) 00:36:26.051 21.455 - 21.577: 95.7004% ( 1) 00:36:26.051 21.577 - 21.699: 95.7929% ( 10) 00:36:26.051 21.699 - 21.821: 95.9593% ( 18) 00:36:26.051 21.821 - 21.943: 96.2644% ( 33) 00:36:26.051 21.943 - 22.065: 96.3662% ( 11) 00:36:26.051 22.065 - 22.187: 96.5418% ( 19) 00:36:26.051 22.187 - 22.309: 96.5881% ( 5) 00:36:26.051 22.309 - 22.430: 96.6343% ( 5) 00:36:26.051 22.430 - 22.552: 96.6898% ( 6) 00:36:26.051 22.552 - 22.674: 96.7730% ( 9) 00:36:26.051 22.674 - 22.796: 96.8192% ( 5) 00:36:26.051 22.796 - 22.918: 96.8747% ( 6) 00:36:26.051 22.918 - 23.040: 96.9117% ( 4) 00:36:26.051 23.040 - 23.162: 96.9857% ( 8) 00:36:26.051 23.162 - 23.284: 97.0596% ( 8) 00:36:26.051 23.284 - 23.406: 97.1521% ( 10) 00:36:26.051 23.406 - 23.528: 97.1891% ( 4) 00:36:26.051 23.528 - 23.650: 97.2816% ( 10) 00:36:26.051 23.650 - 23.771: 97.3555% ( 8) 00:36:26.051 23.771 - 23.893: 97.4295% ( 8) 00:36:26.051 23.893 - 24.015: 97.4757% ( 5) 00:36:26.051 24.015 - 24.137: 97.5035% ( 3) 00:36:26.051 24.137 - 24.259: 97.5405% ( 4) 00:36:26.051 24.259 - 24.381: 97.5774% ( 4) 00:36:26.051 24.381 - 24.503: 97.6144% ( 4) 00:36:26.051 24.503 - 24.625: 97.6422% ( 3) 00:36:26.051 24.625 - 24.747: 97.6607% ( 2) 00:36:26.051 24.747 - 24.869: 97.6884% ( 3) 00:36:26.051 24.869 - 24.990: 97.7346% ( 5) 00:36:26.051 24.990 - 25.112: 97.7531% ( 2) 00:36:26.051 25.112 - 25.234: 97.7624% ( 1) 00:36:26.051 25.234 - 25.356: 97.8178% ( 6) 00:36:26.051 25.356 - 25.478: 97.8271% ( 1) 00:36:26.051 25.600 - 25.722: 97.8826% ( 6) 00:36:26.051 25.722 - 25.844: 97.9380% ( 6) 00:36:26.051 25.844 - 25.966: 97.9473% ( 1) 00:36:26.051 26.088 - 26.210: 98.0028% ( 6) 00:36:26.051 26.210 - 26.331: 98.0583% ( 6) 00:36:26.051 26.331 - 26.453: 98.0860% ( 3) 00:36:26.051 26.453 - 
26.575: 98.1137% ( 3) 00:36:26.051 26.575 - 26.697: 98.1415% ( 3) 00:36:26.051 26.697 - 26.819: 98.1692% ( 3) 00:36:26.051 26.819 - 26.941: 98.2062% ( 4) 00:36:26.051 27.185 - 27.307: 98.2432% ( 4) 00:36:26.051 27.307 - 27.429: 98.2524% ( 1) 00:36:26.051 27.429 - 27.550: 98.2709% ( 2) 00:36:26.051 27.550 - 27.672: 98.2894% ( 2) 00:36:26.051 27.672 - 27.794: 98.3172% ( 3) 00:36:26.051 27.916 - 28.038: 98.3356% ( 2) 00:36:26.051 28.038 - 28.160: 98.3541% ( 2) 00:36:26.051 28.160 - 28.282: 98.3726% ( 2) 00:36:26.051 28.526 - 28.648: 98.3819% ( 1) 00:36:26.051 28.648 - 28.770: 98.3911% ( 1) 00:36:26.051 28.770 - 28.891: 98.4374% ( 5) 00:36:26.051 28.891 - 29.013: 98.4466% ( 1) 00:36:26.051 29.013 - 29.135: 98.4651% ( 2) 00:36:26.051 29.379 - 29.501: 98.4743% ( 1) 00:36:26.051 29.501 - 29.623: 98.5113% ( 4) 00:36:26.051 29.745 - 29.867: 98.5206% ( 1) 00:36:26.051 30.232 - 30.354: 98.5391% ( 2) 00:36:26.051 30.354 - 30.476: 98.5483% ( 1) 00:36:26.051 30.598 - 30.720: 98.5668% ( 2) 00:36:26.051 30.720 - 30.842: 98.5853% ( 2) 00:36:26.051 30.842 - 30.964: 98.6038% ( 2) 00:36:26.051 30.964 - 31.086: 98.6130% ( 1) 00:36:26.051 31.086 - 31.208: 98.6315% ( 2) 00:36:26.051 31.208 - 31.451: 98.6500% ( 2) 00:36:26.051 31.451 - 31.695: 98.6778% ( 3) 00:36:26.051 31.939 - 32.183: 98.7055% ( 3) 00:36:26.051 32.183 - 32.427: 98.7425% ( 4) 00:36:26.051 32.427 - 32.670: 98.7517% ( 1) 00:36:26.051 32.670 - 32.914: 98.7702% ( 2) 00:36:26.051 33.158 - 33.402: 98.7795% ( 1) 00:36:26.051 33.402 - 33.646: 98.7980% ( 2) 00:36:26.051 33.646 - 33.890: 98.8072% ( 1) 00:36:26.051 33.890 - 34.133: 98.8350% ( 3) 00:36:26.051 34.133 - 34.377: 98.8442% ( 1) 00:36:26.051 34.377 - 34.621: 98.8534% ( 1) 00:36:26.051 34.621 - 34.865: 98.8904% ( 4) 00:36:26.051 34.865 - 35.109: 98.9274% ( 4) 00:36:26.051 35.109 - 35.352: 98.9552% ( 3) 00:36:26.051 35.352 - 35.596: 98.9644% ( 1) 00:36:26.051 35.596 - 35.840: 98.9829% ( 2) 00:36:26.051 35.840 - 36.084: 99.0014% ( 2) 00:36:26.051 36.571 - 36.815: 99.0106% ( 1) 00:36:26.051 36.815 - 37.059: 99.0199% ( 1) 00:36:26.051 37.059 - 37.303: 99.0291% ( 1) 00:36:26.051 37.303 - 37.547: 99.0384% ( 1) 00:36:26.051 37.547 - 37.790: 99.0476% ( 1) 00:36:26.051 37.790 - 38.034: 99.0569% ( 1) 00:36:26.051 38.034 - 38.278: 99.0661% ( 1) 00:36:26.051 38.278 - 38.522: 99.0754% ( 1) 00:36:26.051 38.766 - 39.010: 99.0939% ( 2) 00:36:26.051 39.010 - 39.253: 99.1031% ( 1) 00:36:26.051 39.253 - 39.497: 99.1123% ( 1) 00:36:26.051 39.741 - 39.985: 99.1493% ( 4) 00:36:26.051 40.472 - 40.716: 99.1678% ( 2) 00:36:26.051 40.960 - 41.204: 99.1956% ( 3) 00:36:26.051 41.448 - 41.691: 99.2048% ( 1) 00:36:26.051 41.935 - 42.179: 99.2233% ( 2) 00:36:26.051 42.423 - 42.667: 99.2418% ( 2) 00:36:26.051 42.667 - 42.910: 99.2510% ( 1) 00:36:26.051 42.910 - 43.154: 99.2603% ( 1) 00:36:26.051 43.398 - 43.642: 99.2695% ( 1) 00:36:26.051 43.642 - 43.886: 99.2788% ( 1) 00:36:26.051 44.130 - 44.373: 99.2880% ( 1) 00:36:26.051 44.617 - 44.861: 99.3065% ( 2) 00:36:26.051 44.861 - 45.105: 99.3250% ( 2) 00:36:26.051 45.105 - 45.349: 99.3620% ( 4) 00:36:26.051 45.592 - 45.836: 99.3805% ( 2) 00:36:26.051 46.080 - 46.324: 99.4082% ( 3) 00:36:26.051 46.324 - 46.568: 99.4175% ( 1) 00:36:26.051 46.811 - 47.055: 99.4267% ( 1) 00:36:26.051 47.299 - 47.543: 99.4360% ( 1) 00:36:26.051 48.030 - 48.274: 99.4452% ( 1) 00:36:26.051 48.762 - 49.006: 99.4637% ( 2) 00:36:26.051 49.250 - 49.493: 99.4730% ( 1) 00:36:26.051 49.493 - 49.737: 99.4822% ( 1) 00:36:26.051 50.712 - 50.956: 99.4914% ( 1) 00:36:26.051 51.688 - 51.931: 99.5007% ( 1) 00:36:26.051 
52.175 - 52.419: 99.5099% ( 1) 00:36:26.051 52.419 - 52.663: 99.5284% ( 2) 00:36:26.051 52.907 - 53.150: 99.5377% ( 1) 00:36:26.051 54.126 - 54.370: 99.5562% ( 2) 00:36:26.051 54.613 - 54.857: 99.5839% ( 3) 00:36:26.051 55.589 - 55.832: 99.5932% ( 1) 00:36:26.052 56.076 - 56.320: 99.6024% ( 1) 00:36:26.052 56.808 - 57.051: 99.6209% ( 2) 00:36:26.052 57.295 - 57.539: 99.6301% ( 1) 00:36:26.052 58.514 - 58.758: 99.6394% ( 1) 00:36:26.052 58.758 - 59.002: 99.6579% ( 2) 00:36:26.052 59.490 - 59.733: 99.6671% ( 1) 00:36:26.052 59.977 - 60.221: 99.6764% ( 1) 00:36:26.052 60.709 - 60.952: 99.6856% ( 1) 00:36:26.052 62.415 - 62.903: 99.7041% ( 2) 00:36:26.052 62.903 - 63.390: 99.7226% ( 2) 00:36:26.052 63.390 - 63.878: 99.7319% ( 1) 00:36:26.052 63.878 - 64.366: 99.7503% ( 2) 00:36:26.052 64.853 - 65.341: 99.7781% ( 3) 00:36:26.052 65.341 - 65.829: 99.7873% ( 1) 00:36:26.052 66.804 - 67.291: 99.7966% ( 1) 00:36:26.052 67.291 - 67.779: 99.8151% ( 2) 00:36:26.052 67.779 - 68.267: 99.8243% ( 1) 00:36:26.052 69.242 - 69.730: 99.8521% ( 3) 00:36:26.052 69.730 - 70.217: 99.8706% ( 2) 00:36:26.052 71.192 - 71.680: 99.8798% ( 1) 00:36:26.052 71.680 - 72.168: 99.8890% ( 1) 00:36:26.052 73.630 - 74.118: 99.8983% ( 1) 00:36:26.052 76.556 - 77.044: 99.9075% ( 1) 00:36:26.052 92.160 - 92.648: 99.9168% ( 1) 00:36:26.052 141.410 - 142.385: 99.9260% ( 1) 00:36:26.052 152.137 - 153.112: 99.9353% ( 1) 00:36:26.052 154.088 - 155.063: 99.9445% ( 1) 00:36:26.052 156.038 - 157.013: 99.9538% ( 1) 00:36:26.052 163.840 - 164.815: 99.9723% ( 2) 00:36:26.052 169.691 - 170.667: 99.9815% ( 1) 00:36:26.052 176.518 - 177.493: 99.9908% ( 1) 00:36:26.052 885.516 - 889.417: 100.0000% ( 1) 00:36:26.052 00:36:26.052 ************************************ 00:36:26.052 END TEST nvme_overhead 00:36:26.052 ************************************ 00:36:26.052 00:36:26.052 real 0m1.377s 00:36:26.052 user 0m1.112s 00:36:26.052 sys 0m0.200s 00:36:26.052 09:05:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:26.052 09:05:28 -- common/autotest_common.sh@10 -- # set +x 00:36:26.052 09:05:28 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:36:26.052 09:05:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:36:26.052 09:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:26.052 09:05:28 -- common/autotest_common.sh@10 -- # set +x 00:36:26.310 ************************************ 00:36:26.310 START TEST nvme_arbitration 00:36:26.310 ************************************ 00:36:26.310 09:05:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:36:30.493 Initializing NVMe Controllers 00:36:30.494 Attached to 0000:00:10.0 00:36:30.494 Attached to 0000:00:11.0 00:36:30.494 Attached to 0000:00:13.0 00:36:30.494 Attached to 0000:00:12.0 00:36:30.494 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:36:30.494 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:36:30.494 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:36:30.494 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:36:30.494 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:36:30.494 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:36:30.494 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:36:30.494 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:36:30.494 Initialization complete. Launching workers. 
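A note on reading the per-core arbitration results that follow: the tool runs with -n 100000 (echoed in the configuration above), and each controller line reports both a rate and the time that rate implies for 100000 I/Os, so the second figure is simply 100000 divided by the first:

echo "scale=2; 100000 / 490.67" | bc    # 203.80 secs/100000 ios, the core 0 figure below
echo "scale=2; 100000 / 448.00" | bc    # 223.21, the core 2 figure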
00:36:30.494 Starting thread on core 1 with urgent priority queue
00:36:30.494 Starting thread on core 2 with urgent priority queue
00:36:30.494 Starting thread on core 3 with urgent priority queue
00:36:30.494 Starting thread on core 0 with urgent priority queue
00:36:30.494 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios
00:36:30.494 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios
00:36:30.494 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:36:30.494 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:36:30.494 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios
00:36:30.494 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios
00:36:30.494 ========================================================
00:36:30.494
00:36:30.494 ************************************
00:36:30.494 END TEST nvme_arbitration
00:36:30.494 ************************************
00:36:30.494
00:36:30.494 real 0m3.621s
00:36:30.494 user 0m9.695s
00:36:30.494 sys 0m0.179s
00:36:30.494 09:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:36:30.494 09:05:31 -- common/autotest_common.sh@10 -- # set +x
00:36:30.494 09:05:31 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:36:30.494 09:05:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:36:30.494 09:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:36:30.494 09:05:31 -- common/autotest_common.sh@10 -- # set +x
00:36:30.494 ************************************
00:36:30.494 START TEST nvme_single_aen
00:36:30.494 ************************************
00:36:30.494 09:05:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:36:30.494 Asynchronous Event Request test
00:36:30.494 Attached to 0000:00:10.0
00:36:30.494 Attached to 0000:00:11.0
00:36:30.494 Attached to 0000:00:13.0
00:36:30.494 Attached to 0000:00:12.0
00:36:30.494 Reset controller to setup AER completions for this process
00:36:30.494 Registering asynchronous event callbacks...
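The threshold dance that follows is how the aer tool provokes an AER without a real fault: read each controller's temperature threshold (343 Kelvin here), set it below the current composite temperature (323 Kelvin) so the controller must fire a temperature event, then restore it. A rough nvme-cli equivalent for a kernel-attached device, illustrative only: this test drives SPDK's userspace driver, /dev/nvme0 is a hypothetical node, and nvme-cli flag spellings vary across versions.

nvme get-feature /dev/nvme0 -f 0x04          # feature 0x04 is the temperature threshold
nvme set-feature /dev/nvme0 -f 0x04 -v 300   # below the 323 K composite temperature, so an AER fires
nvme set-feature /dev/nvme0 -f 0x04 -v 343   # restore the original threshold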
00:36:30.494 Getting orig temperature thresholds of all controllers 00:36:30.494 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:30.494 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:30.494 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:30.494 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:30.494 Setting all controllers temperature threshold low to trigger AER 00:36:30.494 Waiting for all controllers temperature threshold to be set lower 00:36:30.494 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:30.494 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:36:30.494 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:30.494 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:36:30.494 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:30.494 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:36:30.494 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:30.494 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:36:30.494 Waiting for all controllers to trigger AER and reset threshold 00:36:30.494 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:30.494 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:30.494 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:30.494 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:30.494 Cleaning up... 00:36:30.494 00:36:30.494 real 0m0.310s 00:36:30.494 user 0m0.095s 00:36:30.494 sys 0m0.159s 00:36:30.494 ************************************ 00:36:30.494 END TEST nvme_single_aen 00:36:30.494 ************************************ 00:36:30.494 09:05:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:30.494 09:05:32 -- common/autotest_common.sh@10 -- # set +x 00:36:30.494 09:05:32 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:36:30.494 09:05:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:30.494 09:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:30.494 09:05:32 -- common/autotest_common.sh@10 -- # set +x 00:36:30.494 ************************************ 00:36:30.494 START TEST nvme_doorbell_aers 00:36:30.494 ************************************ 00:36:30.494 09:05:32 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:36:30.494 09:05:32 -- nvme/nvme.sh@70 -- # bdfs=() 00:36:30.494 09:05:32 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:36:30.494 09:05:32 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:36:30.494 09:05:32 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:36:30.494 09:05:32 -- common/autotest_common.sh@1499 -- # bdfs=() 00:36:30.494 09:05:32 -- common/autotest_common.sh@1499 -- # local bdfs 00:36:30.494 09:05:32 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:30.494 09:05:32 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:30.494 09:05:32 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:36:30.494 09:05:32 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:36:30.494 09:05:32 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:36:30.494 09:05:32 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:36:30.494 09:05:32 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:36:30.752 [2024-04-18 09:05:32.771654] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:36:40.719 Executing: test_write_invalid_db 00:36:40.719 Waiting for AER completion... 00:36:40.719 Failure: test_write_invalid_db 00:36:40.719 00:36:40.719 Executing: test_invalid_db_write_overflow_sq 00:36:40.719 Waiting for AER completion... 00:36:40.719 Failure: test_invalid_db_write_overflow_sq 00:36:40.719 00:36:40.719 Executing: test_invalid_db_write_overflow_cq 00:36:40.719 Waiting for AER completion... 00:36:40.719 Failure: test_invalid_db_write_overflow_cq 00:36:40.719 00:36:40.719 09:05:42 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:36:40.719 09:05:42 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:36:40.977 [2024-04-18 09:05:42.889071] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:36:50.998 Executing: test_write_invalid_db 00:36:50.998 Waiting for AER completion... 00:36:50.998 Failure: test_write_invalid_db 00:36:50.998 00:36:50.998 Executing: test_invalid_db_write_overflow_sq 00:36:50.998 Waiting for AER completion... 00:36:50.998 Failure: test_invalid_db_write_overflow_sq 00:36:50.998 00:36:50.998 Executing: test_invalid_db_write_overflow_cq 00:36:50.998 Waiting for AER completion... 00:36:50.998 Failure: test_invalid_db_write_overflow_cq 00:36:50.998 00:36:50.998 09:05:52 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:36:50.998 09:05:52 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:36:50.998 [2024-04-18 09:05:52.931395] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:00.968 Executing: test_write_invalid_db 00:37:00.968 Waiting for AER completion... 00:37:00.968 Failure: test_write_invalid_db 00:37:00.968 00:37:00.969 Executing: test_invalid_db_write_overflow_sq 00:37:00.969 Waiting for AER completion... 00:37:00.969 Failure: test_invalid_db_write_overflow_sq 00:37:00.969 00:37:00.969 Executing: test_invalid_db_write_overflow_cq 00:37:00.969 Waiting for AER completion... 00:37:00.969 Failure: test_invalid_db_write_overflow_cq 00:37:00.969 00:37:00.969 09:06:02 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:37:00.969 09:06:02 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:37:00.969 [2024-04-18 09:06:03.003885] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:10.981 Executing: test_write_invalid_db 00:37:10.981 Waiting for AER completion... 00:37:10.981 Failure: test_write_invalid_db 00:37:10.981 00:37:10.981 Executing: test_invalid_db_write_overflow_sq 00:37:10.981 Waiting for AER completion... 00:37:10.981 Failure: test_invalid_db_write_overflow_sq 00:37:10.981 00:37:10.981 Executing: test_invalid_db_write_overflow_cq 00:37:10.981 Waiting for AER completion... 
00:37:10.981 Failure: test_invalid_db_write_overflow_cq 00:37:10.981 00:37:10.981 ************************************ 00:37:10.981 END TEST nvme_doorbell_aers 00:37:10.981 ************************************ 00:37:10.981 00:37:10.981 real 0m40.348s 00:37:10.981 user 0m29.224s 00:37:10.981 sys 0m10.646s 00:37:10.981 09:06:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:10.981 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:37:10.981 09:06:12 -- nvme/nvme.sh@97 -- # uname 00:37:10.981 09:06:12 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:37:10.981 09:06:12 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:37:10.981 09:06:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:37:10.981 09:06:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:10.981 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:37:10.981 ************************************ 00:37:10.981 START TEST nvme_multi_aen 00:37:10.981 ************************************ 00:37:10.981 09:06:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:37:11.239 [2024-04-18 09:06:13.202851] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.203240] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.203465] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.205512] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.205765] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.206005] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.207932] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.208191] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.208441] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.210331] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.210573] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 00:37:11.239 [2024-04-18 09:06:13.210831] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70312) is not found. Dropping the request. 
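Before the parent/child AER output below, a recap of how the doorbell_aers stage above walked the controllers. Reassembled from the nvme.sh@70-73 and get_nvme_bdfs lines earlier in the log: gen_nvme.sh emits a JSON bdev config, jq pulls out each PCIe address, and each controller gets a 10-second cap, with --preserve-status keeping the test binary's own exit code. The loop body here is a sketch:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
        -r "trtype:PCIe traddr:$bdf"
done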
00:37:11.239 Child process pid: 70832 00:37:11.498 [Child] Asynchronous Event Request test 00:37:11.498 [Child] Attached to 0000:00:10.0 00:37:11.498 [Child] Attached to 0000:00:11.0 00:37:11.498 [Child] Attached to 0000:00:13.0 00:37:11.498 [Child] Attached to 0000:00:12.0 00:37:11.498 [Child] Registering asynchronous event callbacks... 00:37:11.498 [Child] Getting orig temperature thresholds of all controllers 00:37:11.498 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.498 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.498 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.498 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.498 [Child] Waiting for all controllers to trigger AER and reset threshold 00:37:11.498 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.498 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.498 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.498 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.498 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.498 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.498 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.498 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.498 [Child] Cleaning up... 00:37:11.757 Asynchronous Event Request test 00:37:11.757 Attached to 0000:00:10.0 00:37:11.757 Attached to 0000:00:11.0 00:37:11.757 Attached to 0000:00:13.0 00:37:11.757 Attached to 0000:00:12.0 00:37:11.757 Reset controller to setup AER completions for this process 00:37:11.757 Registering asynchronous event callbacks... 
00:37:11.757 Getting orig temperature thresholds of all controllers 00:37:11.757 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.757 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.757 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.757 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:11.757 Setting all controllers temperature threshold low to trigger AER 00:37:11.757 Waiting for all controllers temperature threshold to be set lower 00:37:11.757 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.757 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:37:11.757 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.757 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:37:11.757 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.757 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:37:11.758 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:11.758 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:37:11.758 Waiting for all controllers to trigger AER and reset threshold 00:37:11.758 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.758 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.758 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.758 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:11.758 Cleaning up... 00:37:11.758 00:37:11.758 real 0m0.757s 00:37:11.758 user 0m0.238s 00:37:11.758 sys 0m0.386s 00:37:11.758 09:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:11.758 09:06:13 -- common/autotest_common.sh@10 -- # set +x 00:37:11.758 ************************************ 00:37:11.758 END TEST nvme_multi_aen 00:37:11.758 ************************************ 00:37:11.758 09:06:13 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:37:11.758 09:06:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:37:11.758 09:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:11.758 09:06:13 -- common/autotest_common.sh@10 -- # set +x 00:37:11.758 ************************************ 00:37:11.758 START TEST nvme_startup 00:37:11.758 ************************************ 00:37:11.758 09:06:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:37:12.016 Initializing NVMe Controllers 00:37:12.016 Attached to 0000:00:10.0 00:37:12.016 Attached to 0000:00:11.0 00:37:12.016 Attached to 0000:00:13.0 00:37:12.016 Attached to 0000:00:12.0 00:37:12.016 Initialization complete. 00:37:12.016 Time used:236709.484 (us). 
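The Time used figure above is the startup binary's own measurement of attaching and initializing all four controllers, and it accounts for most of the wall time reported just below (-t 1000000 is a startup time budget; its units are not shown in the log, so reading it as microseconds is an assumption):

echo "scale=3; 236709.484 / 1000000" | bc    # 0.236 s of init inside the 0m0.354s total run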
00:37:12.274 ************************************ 00:37:12.274 END TEST nvme_startup 00:37:12.274 ************************************ 00:37:12.274 00:37:12.274 real 0m0.354s 00:37:12.274 user 0m0.115s 00:37:12.274 sys 0m0.185s 00:37:12.274 09:06:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:12.274 09:06:14 -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 09:06:14 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:37:12.274 09:06:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:12.274 09:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:12.274 09:06:14 -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 ************************************ 00:37:12.274 START TEST nvme_multi_secondary 00:37:12.274 ************************************ 00:37:12.274 09:06:14 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:37:12.274 09:06:14 -- nvme/nvme.sh@52 -- # pid0=70906 00:37:12.274 09:06:14 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:37:12.274 09:06:14 -- nvme/nvme.sh@54 -- # pid1=70907 00:37:12.274 09:06:14 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:37:12.274 09:06:14 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:37:16.478 Initializing NVMe Controllers 00:37:16.478 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:16.478 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:16.478 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:16.478 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:16.478 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:37:16.478 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:37:16.478 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:37:16.478 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:37:16.478 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:37:16.478 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:37:16.478 Initialization complete. Launching workers. 
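The point of nvme_multi_secondary is the multi-process mode visible in the three spdk_nvme_perf launches above: every instance passes the same shared-memory id (-i 0), which ties them into one DPDK shared-memory domain so one primary and two secondary processes can drive the same controllers, while the core masks (-c 0x1/0x2/0x4) keep them on separate lcores. A sketch of that launch pattern, using the paths and arguments from this log:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # long-lived primary on lcore 0
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary on lcore 1
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &           # secondary on lcore 2
wait                                                       # one latency table per process follows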
00:37:16.478 ========================================================
00:37:16.478 Latency(us)
00:37:16.478 Device Information : IOPS MiB/s Average min max
00:37:16.478 PCIE (0000:00:10.0) NSID 1 from core 2: 2002.86 7.82 7986.58 1234.43 28534.69
00:37:16.478 PCIE (0000:00:11.0) NSID 1 from core 2: 2002.86 7.82 7988.47 1282.86 28603.23
00:37:16.478 PCIE (0000:00:13.0) NSID 1 from core 2: 2002.86 7.82 7988.35 1294.83 28140.94
00:37:16.478 PCIE (0000:00:12.0) NSID 1 from core 2: 2002.86 7.82 7988.38 1296.43 28270.13
00:37:16.478 PCIE (0000:00:12.0) NSID 2 from core 2: 2002.86 7.82 7988.66 1268.64 28478.36
00:37:16.478 PCIE (0000:00:12.0) NSID 3 from core 2: 2008.17 7.84 7968.95 1292.30 28238.80
00:37:16.478 ========================================================
00:37:16.478 Total : 12022.48 46.96 7984.89 1234.43 28603.23
00:37:16.478
00:37:16.478 09:06:17 -- nvme/nvme.sh@56 -- # wait 70906
00:37:16.478 Initializing NVMe Controllers
00:37:16.478 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:37:16.478 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:37:16.478 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:37:16.478 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:37:16.478 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:37:16.478 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:37:16.478 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:37:16.478 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:37:16.478 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:37:16.478 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:37:16.478 Initialization complete. Launching workers.
00:37:16.478 ========================================================
00:37:16.478 Latency(us)
00:37:16.478 Device Information : IOPS MiB/s Average min max
00:37:16.478 PCIE (0000:00:10.0) NSID 1 from core 1: 4399.51 17.19 3634.43 1317.07 12822.30
00:37:16.478 PCIE (0000:00:11.0) NSID 1 from core 1: 4399.51 17.19 3635.71 1333.86 13217.44
00:37:16.478 PCIE (0000:00:13.0) NSID 1 from core 1: 4399.51 17.19 3635.33 1318.30 13730.74
00:37:16.478 PCIE (0000:00:12.0) NSID 1 from core 1: 4399.51 17.19 3634.86 1305.88 13748.76
00:37:16.478 PCIE (0000:00:12.0) NSID 2 from core 1: 4399.51 17.19 3634.48 1343.37 12656.05
00:37:16.478 PCIE (0000:00:12.0) NSID 3 from core 1: 4399.51 17.19 3634.16 1393.58 12561.40
00:37:16.478 ========================================================
00:37:16.478 Total : 26397.09 103.11 3634.83 1305.88 13748.76
00:37:16.478
00:37:17.854 Initializing NVMe Controllers
00:37:17.854 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:37:17.854 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:37:17.854 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:37:17.854 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:37:17.854 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:37:17.854 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:37:17.854 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:37:17.854 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:37:17.854 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:37:17.854 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:37:17.854 Initialization complete. Launching workers.
00:37:17.854 ========================================================
00:37:17.854 Latency(us)
00:37:17.854 Device Information : IOPS MiB/s Average min max
00:37:17.854 PCIE (0000:00:10.0) NSID 1 from core 0: 6357.55 24.83 2514.87 982.06 15580.98
00:37:17.854 PCIE (0000:00:11.0) NSID 1 from core 0: 6357.55 24.83 2516.17 1006.00 15505.83
00:37:17.854 PCIE (0000:00:13.0) NSID 1 from core 0: 6357.55 24.83 2516.15 1010.99 15538.20
00:37:17.854 PCIE (0000:00:12.0) NSID 1 from core 0: 6357.55 24.83 2516.13 1024.46 14738.14
00:37:17.854 PCIE (0000:00:12.0) NSID 2 from core 0: 6357.55 24.83 2516.09 1036.38 16088.67
00:37:17.854 PCIE (0000:00:12.0) NSID 3 from core 0: 6357.55 24.83 2516.06 1016.54 15924.96
00:37:17.854 ========================================================
00:37:17.854 Total : 38145.27 149.00 2515.91 982.06 16088.67
00:37:17.854
00:37:17.854 09:06:19 -- nvme/nvme.sh@57 -- # wait 70907
00:37:17.854 09:06:19 -- nvme/nvme.sh@61 -- # pid0=70972
00:37:17.854 09:06:19 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:37:17.854 09:06:19 -- nvme/nvme.sh@63 -- # pid1=70973
00:37:17.854 09:06:19 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:37:17.854 09:06:19 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:37:21.140 Initializing NVMe Controllers
00:37:21.140 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:37:21.140 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:37:21.140 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:37:21.140 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:37:21.140 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:37:21.140 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:37:21.140 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:37:21.140 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:37:21.140 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:37:21.140 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:37:21.140 Initialization complete. Launching workers.
00:37:21.140 ========================================================
00:37:21.140 Latency(us)
00:37:21.140 Device Information : IOPS MiB/s Average min max
00:37:21.140 PCIE (0000:00:10.0) NSID 1 from core 1: 5135.47 20.06 3113.60 1198.53 8928.89
00:37:21.140 PCIE (0000:00:11.0) NSID 1 from core 1: 5135.47 20.06 3115.04 1207.48 8557.94
00:37:21.140 PCIE (0000:00:13.0) NSID 1 from core 1: 5135.47 20.06 3115.09 1211.95 9973.34
00:37:21.140 PCIE (0000:00:12.0) NSID 1 from core 1: 5135.47 20.06 3115.13 1203.28 10550.68
00:37:21.140 PCIE (0000:00:12.0) NSID 2 from core 1: 5135.47 20.06 3115.16 1197.12 8749.07
00:37:21.140 PCIE (0000:00:12.0) NSID 3 from core 1: 5135.47 20.06 3115.08 1230.83 8990.52
00:37:21.140 ========================================================
00:37:21.140 Total : 30812.83 120.36 3114.85 1197.12 10550.68
00:37:21.140
00:37:21.399 Initializing NVMe Controllers
00:37:21.399 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:37:21.399 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:37:21.399 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:37:21.399 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:37:21.399 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:37:21.399 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:37:21.399 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:37:21.399 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:37:21.399 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:37:21.399 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:37:21.399 Initialization complete. Launching workers.
00:37:21.399 ========================================================
00:37:21.399 Latency(us)
00:37:21.399 Device Information : IOPS MiB/s Average min max
00:37:21.399 PCIE (0000:00:10.0) NSID 1 from core 0: 4779.73 18.67 3345.35 955.42 21293.25
00:37:21.399 PCIE (0000:00:11.0) NSID 1 from core 0: 4779.73 18.67 3347.02 996.27 20746.46
00:37:21.399 PCIE (0000:00:13.0) NSID 1 from core 0: 4779.73 18.67 3346.98 1040.11 20279.98
00:37:21.399 PCIE (0000:00:12.0) NSID 1 from core 0: 4779.73 18.67 3346.95 1040.58 19832.89
00:37:21.399 PCIE (0000:00:12.0) NSID 2 from core 0: 4779.73 18.67 3346.93 1013.18 19595.73
00:37:21.399 PCIE (0000:00:12.0) NSID 3 from core 0: 4779.73 18.67 3347.04 975.99 19378.80
00:37:21.399 ========================================================
00:37:21.399 Total : 28678.38 112.02 3346.71 955.42 21293.25
00:37:21.399
00:37:23.303 Initializing NVMe Controllers
00:37:23.303 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:37:23.303 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:37:23.303 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:37:23.303 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:37:23.303 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:37:23.303 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:37:23.303 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:37:23.303 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:37:23.303 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:37:23.303 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:37:23.303 Initialization complete. Launching workers.
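Each Total row in these tables is the sum of the six namespace rows (IOPS and MiB/s add; min and max span the rows). For the core-2 table that follows, six namespaces at 2975.67 IOPS apiece:

echo "6 * 2975.67" | bc    # 17854.02, matching the reported Total of 17854.05 up to rounding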
00:37:23.303 ========================================================
00:37:23.303 Latency(us)
00:37:23.303 Device Information : IOPS MiB/s Average min max
00:37:23.303 PCIE (0000:00:10.0) NSID 1 from core 2: 2975.67 11.62 5374.42 1228.89 21163.84
00:37:23.303 PCIE (0000:00:11.0) NSID 1 from core 2: 2975.67 11.62 5376.52 1239.22 22177.62
00:37:23.303 PCIE (0000:00:13.0) NSID 1 from core 2: 2975.67 11.62 5375.90 1247.07 21737.88
00:37:23.303 PCIE (0000:00:12.0) NSID 1 from core 2: 2975.67 11.62 5375.60 1242.20 21137.57
00:37:23.303 PCIE (0000:00:12.0) NSID 2 from core 2: 2975.67 11.62 5376.43 1218.93 21032.08
00:37:23.303 PCIE (0000:00:12.0) NSID 3 from core 2: 2975.67 11.62 5376.35 1235.07 21071.13
00:37:23.303 ========================================================
00:37:23.303 Total : 17854.05 69.74 5375.87 1218.93 22177.62
00:37:23.303
00:37:23.303 ************************************
00:37:23.303 END TEST nvme_multi_secondary
00:37:23.303 ************************************
00:37:23.303 09:06:25 -- nvme/nvme.sh@65 -- # wait 70972
00:37:23.303 09:06:25 -- nvme/nvme.sh@66 -- # wait 70973
00:37:23.303
00:37:23.303 real 0m11.122s
00:37:23.303 user 0m18.704s
00:37:23.303 sys 0m1.140s
00:37:23.303 09:06:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:37:23.303 09:06:25 -- common/autotest_common.sh@10 -- # set +x
00:37:23.562 09:06:25 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:37:23.562 09:06:25 -- nvme/nvme.sh@102 -- # kill_stub
00:37:23.562 09:06:25 -- common/autotest_common.sh@1075 -- # [[ -e /proc/69829 ]]
00:37:23.562 09:06:25 -- common/autotest_common.sh@1076 -- # kill 69829
00:37:23.562 09:06:25 -- common/autotest_common.sh@1077 -- # wait 69829
00:37:23.562 [2024-04-18 09:06:25.434793] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.435245] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.435809] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.436171] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.440717] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.440984] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.441165] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.441329] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.444039] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request.
00:37:23.562 [2024-04-18 09:06:25.444250] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.444414] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.444564] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.447405] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.447610] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.447765] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.562 [2024-04-18 09:06:25.447852] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70831) is not found. Dropping the request. 00:37:23.820 [2024-04-18 09:06:25.799605] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:37:23.820 09:06:25 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:37:23.820 09:06:25 -- common/autotest_common.sh@1083 -- # echo 2 00:37:23.820 09:06:25 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:37:23.820 09:06:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:23.820 09:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:23.820 09:06:25 -- common/autotest_common.sh@10 -- # set +x 00:37:23.820 ************************************ 00:37:23.820 START TEST bdev_nvme_reset_stuck_adm_cmd 00:37:23.820 ************************************ 00:37:23.820 09:06:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:37:24.078 * Looking for test storage... 
00:37:24.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:37:24.078 09:06:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:37:24.078 09:06:26 -- common/autotest_common.sh@1510 -- # bdfs=() 00:37:24.078 09:06:26 -- common/autotest_common.sh@1510 -- # local bdfs 00:37:24.078 09:06:26 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:37:24.078 09:06:26 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:37:24.078 09:06:26 -- common/autotest_common.sh@1499 -- # bdfs=() 00:37:24.078 09:06:26 -- common/autotest_common.sh@1499 -- # local bdfs 00:37:24.078 09:06:26 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:24.078 09:06:26 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:24.078 09:06:26 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:37:24.078 09:06:26 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:37:24.078 09:06:26 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:37:24.078 09:06:26 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:37:24.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71139 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:24.078 09:06:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71139 00:37:24.078 09:06:26 -- common/autotest_common.sh@817 -- # '[' -z 71139 ']' 00:37:24.078 09:06:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.078 09:06:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:24.078 09:06:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.078 09:06:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:24.078 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:37:24.411 [2024-04-18 09:06:26.250257] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
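[annotation] The BDF discovery just traced is simple enough to reconstruct: gen_nvme.sh emits an SPDK bdev config for every NVMe controller it can see, and the helpers pull the PCI addresses back out of that JSON with jq. A sketch consistent with the xtrace above, abbreviated relative to the real common/autotest_common.sh:

get_nvme_bdfs() {
    local bdfs=()
    # each attach entry in gen_nvme.sh's JSON carries .params.traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1
    printf '%s\n' "${bdfs[@]}"
}

get_first_nvme_bdf() {
    local bdfs=()
    bdfs=($(get_nvme_bdfs))
    echo "${bdfs[0]}"    # 0000:00:10.0 in this run
}

nvme_reset_stuck_adm_cmd.sh then refuses to run with an empty address, which is the '[' -z 0000:00:10.0 ']' check traced above.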
00:37:24.411 [2024-04-18 09:06:26.250916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:37:24.411 [2024-04-18 09:06:26.462476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:24.670 [2024-04-18 09:06:26.727236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.670 [2024-04-18 09:06:26.727320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.670 [2024-04-18 09:06:26.727457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.670 [2024-04-18 09:06:26.727480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:26.081 09:06:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:26.081 09:06:27 -- common/autotest_common.sh@850 -- # return 0 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:37:26.081 09:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:37:26.081 09:06:27 -- common/autotest_common.sh@10 -- # set +x 00:37:26.081 nvme0n1 00:37:26.081 09:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_bAAIx.txt 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:37:26.081 09:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:37:26.081 09:06:27 -- common/autotest_common.sh@10 -- # set +x 00:37:26.081 true 00:37:26.081 09:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713431187 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71168 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:26.081 09:06:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:37:27.981 09:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:37:27.981 09:06:29 -- common/autotest_common.sh@10 -- # set +x 00:37:27.981 [2024-04-18 09:06:29.915913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:37:27.981 [2024-04-18 09:06:29.916685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:27.981 [2024-04-18 09:06:29.916851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:37:27.981 [2024-04-18 09:06:29.916977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:27.981 [2024-04-18 09:06:29.918868] 
bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:27.981 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71168 00:37:27.981 09:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71168 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71168 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.981 09:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:37:27.981 09:06:29 -- common/autotest_common.sh@10 -- # set +x 00:37:27.981 09:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:37:27.981 09:06:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_bAAIx.txt 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_bAAIx.txt 00:37:27.981 09:06:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71139 00:37:27.981 09:06:30 -- common/autotest_common.sh@936 -- # '[' -z 71139 ']' 00:37:27.981 09:06:30 -- common/autotest_common.sh@940 -- # kill -0 71139 00:37:27.981 09:06:30 -- common/autotest_common.sh@941 -- # uname 00:37:27.981 09:06:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:27.981 09:06:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71139 
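[annotation] Putting the trace together, the stuck-admin-command test is a small state machine: attach nvme0, arm a one-shot injection on admin opcode 10 (Get Features) that holds the command for err_injection_timeout and then completes it with SCT=0/SC=1, fire the Get Features (NUMBER OF QUEUES) RPC in the background, reset the controller while that command is pending, and verify both the decoded status and the elapsed time. The RPC names and flags below are copied verbatim from the log; base64_decode_bits is a reconstruction consistent with its trace, relying on the fact that the last two bytes of the 16-byte completion carry the phase bit plus status field, so shift/mask pairs 1/255 and 9/3 extract SC and SCT. Redirecting the send_cmd output into the mktemp file is an assumption implied by the later jq -r .cpl:

# decode ((status >> $2) & $3) from a base64-encoded NVMe completion
base64_decode_bits() {
    local bin_array status
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    status=$((${bin_array[-2]} | ${bin_array[-1]} << 8))
    printf '0x%x' $(((status >> $2) & $3))
}

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

start_time=$(date +%s)
# Get Features / NUMBER OF QUEUES, held by the injection until the reset
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
    > "$tmp_file" &
get_feat_pid=$!
sleep 2
$rpc bdev_nvme_reset_controller nvme0   # must complete the stuck command manually
wait "$get_feat_pid"
diff_time=$(($(date +%s) - start_time))

nvme_status_sc=$(base64_decode_bits "$(jq -r .cpl "$tmp_file")" 1 255)  # 0x1 here
nvme_status_sct=$(base64_decode_bits "$(jq -r .cpl "$tmp_file")" 9 3)   # 0x0 here

The assertions traced further down close the loop: the test fails if the decoded (SC, SCT) differ from the injected (1, 0), or if diff_time exceeded test_timeout (5 s); here the reset finished in 2 s.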
00:37:27.981 killing process with pid 71139 00:37:27.981 09:06:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:27.981 09:06:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:27.981 09:06:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71139' 00:37:27.981 09:06:30 -- common/autotest_common.sh@955 -- # kill 71139 00:37:27.982 09:06:30 -- common/autotest_common.sh@960 -- # wait 71139 00:37:31.285 09:06:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:37:31.285 09:06:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:37:31.285 00:37:31.285 real 0m6.920s 00:37:31.285 user 0m23.532s 00:37:31.285 sys 0m0.787s 00:37:31.285 09:06:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:31.285 ************************************ 00:37:31.285 END TEST bdev_nvme_reset_stuck_adm_cmd 00:37:31.285 ************************************ 00:37:31.285 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:37:31.285 09:06:32 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:37:31.285 09:06:32 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:37:31.285 09:06:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:31.285 09:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:31.285 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:37:31.285 ************************************ 00:37:31.285 START TEST nvme_fio 00:37:31.285 ************************************ 00:37:31.285 09:06:32 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:37:31.285 09:06:32 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:37:31.285 09:06:32 -- nvme/nvme.sh@32 -- # ran_fio=false 00:37:31.285 09:06:32 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:37:31.285 09:06:32 -- common/autotest_common.sh@1499 -- # bdfs=() 00:37:31.285 09:06:32 -- common/autotest_common.sh@1499 -- # local bdfs 00:37:31.285 09:06:32 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:31.285 09:06:32 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:31.285 09:06:32 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:37:31.285 09:06:33 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:37:31.285 09:06:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:37:31.285 09:06:33 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:37:31.285 09:06:33 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:37:31.285 09:06:33 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:31.285 09:06:33 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:37:31.285 09:06:33 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:31.285 09:06:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:31.285 09:06:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:37:31.850 09:06:33 -- nvme/nvme.sh@41 -- # bs=4096 00:37:31.851 09:06:33 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:31.851 09:06:33 -- common/autotest_common.sh@1346 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:31.851 09:06:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:37:31.851 09:06:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:31.851 09:06:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:37:31.851 09:06:33 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:31.851 09:06:33 -- common/autotest_common.sh@1327 -- # shift 00:37:31.851 09:06:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:37:31.851 09:06:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.851 09:06:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:31.851 09:06:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:37:31.851 09:06:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:37:31.851 09:06:33 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:31.851 09:06:33 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:31.851 09:06:33 -- common/autotest_common.sh@1333 -- # break 00:37:31.851 09:06:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:31.851 09:06:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:31.851 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:31.851 fio-3.35 00:37:31.851 Starting 1 thread 00:37:35.137 00:37:35.137 test: (groupid=0, jobs=1): err= 0: pid=71328: Thu Apr 18 09:06:37 2024 00:37:35.137 read: IOPS=17.3k, BW=67.7MiB/s (70.9MB/s)(135MiB/2001msec) 00:37:35.137 slat (usec): min=4, max=199, avg= 6.04, stdev= 2.04 00:37:35.137 clat (usec): min=311, max=11059, avg=3672.29, stdev=638.45 00:37:35.137 lat (usec): min=317, max=11066, avg=3678.33, stdev=639.27 00:37:35.137 clat percentiles (usec): 00:37:35.137 | 1.00th=[ 2343], 5.00th=[ 3097], 10.00th=[ 3195], 20.00th=[ 3261], 00:37:35.137 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3818], 00:37:35.137 | 70.00th=[ 3949], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4424], 00:37:35.137 | 99.00th=[ 6390], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[ 9634], 00:37:35.137 | 99.99th=[10290] 00:37:35.137 bw ( KiB/s): min=64984, max=74640, per=100.00%, avg=71197.33, stdev=5391.38, samples=3 00:37:35.137 iops : min=16246, max=18660, avg=17799.33, stdev=1347.85, samples=3 00:37:35.137 write: IOPS=17.3k, BW=67.7MiB/s (71.0MB/s)(136MiB/2001msec); 0 zone resets 00:37:35.137 slat (nsec): min=4670, max=75748, avg=6172.42, stdev=1867.18 00:37:35.137 clat (usec): min=277, max=10843, avg=3681.94, stdev=651.04 00:37:35.137 lat (usec): min=283, max=10850, avg=3688.11, stdev=651.87 00:37:35.137 clat percentiles (usec): 00:37:35.137 | 1.00th=[ 2343], 5.00th=[ 3097], 10.00th=[ 3195], 20.00th=[ 3261], 00:37:35.137 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3818], 00:37:35.137 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:37:35.137 | 99.00th=[ 6456], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[ 9765], 00:37:35.137 | 99.99th=[10421] 00:37:35.137 bw ( KiB/s): min=65368, max=74128, per=100.00%, 
avg=71133.33, stdev=4994.18, samples=3 00:37:35.137 iops : min=16342, max=18532, avg=17783.33, stdev=1248.55, samples=3 00:37:35.137 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:37:35.137 lat (msec) : 2=0.31%, 4=73.95%, 10=25.69%, 20=0.02% 00:37:35.137 cpu : usr=99.10%, sys=0.00%, ctx=16, majf=0, minf=605 00:37:35.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:35.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:35.137 issued rwts: total=34661,34688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:35.137 00:37:35.137 Run status group 0 (all jobs): 00:37:35.137 READ: bw=67.7MiB/s (70.9MB/s), 67.7MiB/s-67.7MiB/s (70.9MB/s-70.9MB/s), io=135MiB (142MB), run=2001-2001msec 00:37:35.137 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=136MiB (142MB), run=2001-2001msec 00:37:35.395 ----------------------------------------------------- 00:37:35.395 Suppressions used: 00:37:35.395 count bytes template 00:37:35.395 1 32 /usr/src/fio/parse.c 00:37:35.395 1 8 libtcmalloc_minimal.so 00:37:35.395 ----------------------------------------------------- 00:37:35.395 00:37:35.395 09:06:37 -- nvme/nvme.sh@44 -- # ran_fio=true 00:37:35.395 09:06:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:35.395 09:06:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:37:35.395 09:06:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:35.702 09:06:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:35.702 09:06:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:37:36.271 09:06:38 -- nvme/nvme.sh@41 -- # bs=4096 00:37:36.271 09:06:38 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:36.271 09:06:38 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:36.271 09:06:38 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:37:36.271 09:06:38 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.271 09:06:38 -- common/autotest_common.sh@1325 -- # local sanitizers 00:37:36.271 09:06:38 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:36.271 09:06:38 -- common/autotest_common.sh@1327 -- # shift 00:37:36.271 09:06:38 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:37:36.271 09:06:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.271 09:06:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:36.271 09:06:38 -- common/autotest_common.sh@1331 -- # grep libasan 00:37:36.271 09:06:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:37:36.271 09:06:38 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:36.271 09:06:38 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:36.271 09:06:38 -- common/autotest_common.sh@1333 -- # break 00:37:36.271 09:06:38 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:36.271 09:06:38 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:36.271 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:36.271 fio-3.35 00:37:36.271 Starting 1 thread 00:37:40.517 00:37:40.517 test: (groupid=0, jobs=1): err= 0: pid=71395: Thu Apr 18 09:06:41 2024 00:37:40.517 read: IOPS=16.7k, BW=65.3MiB/s (68.4MB/s)(131MiB/2001msec) 00:37:40.517 slat (nsec): min=4340, max=86459, avg=6305.33, stdev=1806.96 00:37:40.517 clat (usec): min=239, max=10775, avg=3806.02, stdev=725.90 00:37:40.517 lat (usec): min=245, max=10782, avg=3812.33, stdev=726.73 00:37:40.517 clat percentiles (usec): 00:37:40.518 | 1.00th=[ 2147], 5.00th=[ 2999], 10.00th=[ 3097], 20.00th=[ 3261], 00:37:40.518 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3785], 60.00th=[ 3982], 00:37:40.518 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4817], 00:37:40.518 | 99.00th=[ 6128], 99.50th=[ 7177], 99.90th=[ 8979], 99.95th=[ 9503], 00:37:40.518 | 99.99th=[10159] 00:37:40.518 bw ( KiB/s): min=61736, max=69728, per=99.53%, avg=66512.00, stdev=4218.20, samples=3 00:37:40.518 iops : min=15434, max=17432, avg=16628.00, stdev=1054.55, samples=3 00:37:40.518 write: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(131MiB/2001msec); 0 zone resets 00:37:40.518 slat (nsec): min=4407, max=96984, avg=6449.61, stdev=1826.70 00:37:40.518 clat (usec): min=219, max=10829, avg=3809.97, stdev=722.82 00:37:40.518 lat (usec): min=225, max=10835, avg=3816.42, stdev=723.64 00:37:40.518 clat percentiles (usec): 00:37:40.518 | 1.00th=[ 2147], 5.00th=[ 2999], 10.00th=[ 3130], 20.00th=[ 3261], 00:37:40.518 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3785], 60.00th=[ 3982], 00:37:40.518 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4817], 00:37:40.518 | 99.00th=[ 6063], 99.50th=[ 7177], 99.90th=[ 9110], 99.95th=[ 9634], 00:37:40.518 | 99.99th=[10421] 00:37:40.518 bw ( KiB/s): min=62120, max=69424, per=99.23%, avg=66453.33, stdev=3837.94, samples=3 00:37:40.518 iops : min=15530, max=17356, avg=16613.33, stdev=959.48, samples=3 00:37:40.518 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:37:40.518 lat (msec) : 2=0.70%, 4=62.21%, 10=37.02%, 20=0.02% 00:37:40.518 cpu : usr=99.15%, sys=0.10%, ctx=22, majf=0, minf=606 00:37:40.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:40.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:40.518 issued rwts: total=33430,33502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:40.518 00:37:40.518 Run status group 0 (all jobs): 00:37:40.518 READ: bw=65.3MiB/s (68.4MB/s), 65.3MiB/s-65.3MiB/s (68.4MB/s-68.4MB/s), io=131MiB (137MB), run=2001-2001msec 00:37:40.518 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=131MiB (137MB), run=2001-2001msec 00:37:40.518 ----------------------------------------------------- 00:37:40.518 Suppressions used: 00:37:40.518 count bytes template 00:37:40.518 1 32 /usr/src/fio/parse.c 00:37:40.518 1 8 libtcmalloc_minimal.so 00:37:40.518 ----------------------------------------------------- 00:37:40.518 00:37:40.518 09:06:42 -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:37:40.518 09:06:42 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:40.518 09:06:42 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:37:40.518 09:06:42 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:40.518 09:06:42 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:40.518 09:06:42 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:37:40.776 09:06:42 -- nvme/nvme.sh@41 -- # bs=4096 00:37:40.776 09:06:42 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:37:40.776 09:06:42 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:37:40.776 09:06:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:37:40.776 09:06:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:40.776 09:06:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:37:40.776 09:06:42 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:40.776 09:06:42 -- common/autotest_common.sh@1327 -- # shift 00:37:40.776 09:06:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:37:40.776 09:06:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:37:40.776 09:06:42 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:40.776 09:06:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:37:40.776 09:06:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:37:40.776 09:06:42 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:40.776 09:06:42 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:40.776 09:06:42 -- common/autotest_common.sh@1333 -- # break 00:37:40.776 09:06:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:40.776 09:06:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:37:41.034 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:41.034 fio-3.35 00:37:41.034 Starting 1 thread 00:37:44.334 00:37:44.334 test: (groupid=0, jobs=1): err= 0: pid=71456: Thu Apr 18 09:06:46 2024 00:37:44.334 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:37:44.334 slat (usec): min=4, max=112, avg= 5.91, stdev= 1.77 00:37:44.334 clat (usec): min=716, max=9335, avg=3565.09, stdev=708.14 00:37:44.334 lat (usec): min=723, max=9341, avg=3570.99, stdev=708.86 00:37:44.334 clat percentiles (usec): 00:37:44.334 | 1.00th=[ 1958], 5.00th=[ 2606], 10.00th=[ 3032], 20.00th=[ 3130], 00:37:44.334 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3359], 60.00th=[ 3556], 00:37:44.334 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:37:44.334 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 8291], 99.95th=[ 8455], 00:37:44.334 | 99.99th=[ 8979] 00:37:44.334 bw ( KiB/s): min=63968, max=73224, per=96.81%, avg=69309.33, stdev=4790.09, samples=3 00:37:44.334 iops : min=15992, 
max=18306, avg=17327.33, stdev=1197.52, samples=3 00:37:44.334 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec); 0 zone resets 00:37:44.334 slat (usec): min=4, max=135, avg= 6.01, stdev= 1.66 00:37:44.334 clat (usec): min=712, max=9241, avg=3560.77, stdev=711.42 00:37:44.334 lat (usec): min=718, max=9246, avg=3566.78, stdev=712.10 00:37:44.334 clat percentiles (usec): 00:37:44.334 | 1.00th=[ 1909], 5.00th=[ 2606], 10.00th=[ 2999], 20.00th=[ 3130], 00:37:44.334 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3359], 60.00th=[ 3523], 00:37:44.334 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:37:44.334 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 8356], 99.95th=[ 8586], 00:37:44.334 | 99.99th=[ 9110] 00:37:44.334 bw ( KiB/s): min=63800, max=73144, per=96.70%, avg=69229.33, stdev=4852.65, samples=3 00:37:44.334 iops : min=15950, max=18286, avg=17307.33, stdev=1213.16, samples=3 00:37:44.334 lat (usec) : 750=0.01%, 1000=0.02% 00:37:44.334 lat (msec) : 2=1.16%, 4=71.14%, 10=27.67% 00:37:44.334 cpu : usr=99.40%, sys=0.00%, ctx=3, majf=0, minf=605 00:37:44.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:44.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:44.334 issued rwts: total=35813,35815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:44.334 00:37:44.334 Run status group 0 (all jobs): 00:37:44.334 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:37:44.334 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:37:44.334 ----------------------------------------------------- 00:37:44.334 Suppressions used: 00:37:44.334 count bytes template 00:37:44.334 1 32 /usr/src/fio/parse.c 00:37:44.334 1 8 libtcmalloc_minimal.so 00:37:44.334 ----------------------------------------------------- 00:37:44.334 00:37:44.334 09:06:46 -- nvme/nvme.sh@44 -- # ran_fio=true 00:37:44.334 09:06:46 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:44.334 09:06:46 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:37:44.334 09:06:46 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:44.909 09:06:46 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:37:44.909 09:06:46 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:45.180 09:06:47 -- nvme/nvme.sh@41 -- # bs=4096 00:37:45.180 09:06:47 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:37:45.180 09:06:47 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:37:45.180 09:06:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:37:45.180 09:06:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:45.180 09:06:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:37:45.180 09:06:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:45.180 09:06:47 -- 
common/autotest_common.sh@1327 -- # shift 00:37:45.180 09:06:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:37:45.180 09:06:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:37:45.180 09:06:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:45.180 09:06:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:37:45.180 09:06:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:37:45.180 09:06:47 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:45.180 09:06:47 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:45.180 09:06:47 -- common/autotest_common.sh@1333 -- # break 00:37:45.180 09:06:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:45.180 09:06:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:37:45.180 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:45.180 fio-3.35 00:37:45.180 Starting 1 thread 00:37:50.444 00:37:50.444 test: (groupid=0, jobs=1): err= 0: pid=71522: Thu Apr 18 09:06:51 2024 00:37:50.444 read: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec) 00:37:50.444 slat (nsec): min=4409, max=77288, avg=6258.08, stdev=2213.24 00:37:50.444 clat (usec): min=338, max=9308, avg=3882.29, stdev=911.24 00:37:50.444 lat (usec): min=345, max=9314, avg=3888.55, stdev=912.29 00:37:50.444 clat percentiles (usec): 00:37:50.444 | 1.00th=[ 2343], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3261], 00:37:50.444 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3654], 60.00th=[ 4080], 00:37:50.444 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5735], 00:37:50.444 | 99.00th=[ 7570], 99.50th=[ 8160], 99.90th=[ 8848], 99.95th=[ 9110], 00:37:50.444 | 99.99th=[ 9241] 00:37:50.444 bw ( KiB/s): min=55936, max=68032, per=97.00%, avg=63570.67, stdev=6643.10, samples=3 00:37:50.444 iops : min=13984, max=17008, avg=15892.67, stdev=1660.78, samples=3 00:37:50.444 write: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec); 0 zone resets 00:37:50.444 slat (nsec): min=4434, max=63782, avg=6545.79, stdev=2231.27 00:37:50.444 clat (usec): min=291, max=9341, avg=3890.45, stdev=913.31 00:37:50.444 lat (usec): min=299, max=9348, avg=3897.00, stdev=914.37 00:37:50.444 clat percentiles (usec): 00:37:50.444 | 1.00th=[ 2343], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3261], 00:37:50.444 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3654], 60.00th=[ 4080], 00:37:50.444 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 5735], 00:37:50.444 | 99.00th=[ 7504], 99.50th=[ 8160], 99.90th=[ 8717], 99.95th=[ 8979], 00:37:50.444 | 99.99th=[ 9241] 00:37:50.444 bw ( KiB/s): min=55456, max=67432, per=96.38%, avg=63274.67, stdev=6775.70, samples=3 00:37:50.444 iops : min=13864, max=16858, avg=15818.67, stdev=1693.93, samples=3 00:37:50.444 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:37:50.444 lat (msec) : 2=0.38%, 4=54.99%, 10=44.60% 00:37:50.444 cpu : usr=99.05%, sys=0.05%, ctx=4, majf=0, minf=603 00:37:50.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:50.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:50.444 issued 
rwts: total=32783,32843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:50.444 00:37:50.444 Run status group 0 (all jobs): 00:37:50.444 READ: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:37:50.444 WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (135MB), run=2001-2001msec 00:37:50.444 ----------------------------------------------------- 00:37:50.444 Suppressions used: 00:37:50.444 count bytes template 00:37:50.444 1 32 /usr/src/fio/parse.c 00:37:50.444 1 8 libtcmalloc_minimal.so 00:37:50.444 ----------------------------------------------------- 00:37:50.444 00:37:50.444 09:06:52 -- nvme/nvme.sh@44 -- # ran_fio=true 00:37:50.444 09:06:52 -- nvme/nvme.sh@46 -- # true 00:37:50.444 00:37:50.444 real 0m19.113s 00:37:50.444 user 0m14.487s 00:37:50.444 sys 0m4.647s 00:37:50.444 09:06:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:50.444 09:06:52 -- common/autotest_common.sh@10 -- # set +x 00:37:50.444 ************************************ 00:37:50.444 END TEST nvme_fio 00:37:50.444 ************************************ 00:37:50.444 00:37:50.444 real 1m37.125s 00:37:50.444 user 3m49.730s 00:37:50.444 sys 0m23.954s 00:37:50.444 09:06:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:50.444 09:06:52 -- common/autotest_common.sh@10 -- # set +x 00:37:50.444 ************************************ 00:37:50.444 END TEST nvme 00:37:50.444 ************************************ 00:37:50.444 09:06:52 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:37:50.444 09:06:52 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:37:50.444 09:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:50.444 09:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:50.444 09:06:52 -- common/autotest_common.sh@10 -- # set +x 00:37:50.444 ************************************ 00:37:50.444 START TEST nvme_scc 00:37:50.444 ************************************ 00:37:50.444 09:06:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:37:50.444 * Looking for test storage... 
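[annotation] All of the fio runs above funnel through the fio_plugin helper whose trace repeats once per controller: fio itself is not built with ASan, so the sanitizer runtime that the SPDK ioengine was linked against has to be preloaded ahead of the plugin, otherwise ASan aborts at load time. A sketch consistent with the xtrace (the real helper in common/autotest_common.sh handles more cases):

fio_plugin() {
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan')
    local plugin=$1
    shift
    local asan_lib=
    local sanitizer
    for sanitizer in "${sanitizers[@]}"; do
        # which sanitizer runtime is the ioengine linked against?
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # sanitizer runtime first, SPDK ioengine second
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

fio_nvme points this at build/fio/spdk_nvme with the example config, and the --filename strings encode transport plus address with the colons of the BDF swapped for dots (fio treats ':' as a filename separator), which is why the log shows traddr=0000.00.10.0 rather than 0000:00:10.0.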
00:37:50.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:37:50.444 09:06:52 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:37:50.444 09:06:52 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:37:50.444 09:06:52 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:37:50.444 09:06:52 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:50.444 09:06:52 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:50.444 09:06:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.444 09:06:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.444 09:06:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.444 09:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.444 09:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.444 09:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.444 09:06:52 -- paths/export.sh@5 -- # export PATH 00:37:50.444 09:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.444 09:06:52 -- nvme/functions.sh@10 -- # ctrls=() 00:37:50.444 09:06:52 -- nvme/functions.sh@10 -- # declare -A ctrls 00:37:50.444 09:06:52 -- nvme/functions.sh@11 -- # nvmes=() 00:37:50.444 09:06:52 -- nvme/functions.sh@11 -- # declare -A nvmes 00:37:50.444 09:06:52 -- nvme/functions.sh@12 -- # bdfs=() 00:37:50.444 09:06:52 -- nvme/functions.sh@12 -- # declare -A bdfs 00:37:50.444 09:06:52 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:37:50.444 09:06:52 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:37:50.444 09:06:52 -- nvme/functions.sh@14 -- # nvme_name= 00:37:50.444 09:06:52 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:50.445 09:06:52 -- nvme/nvme_scc.sh@12 -- # uname 00:37:50.445 09:06:52 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
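[annotation] Before any controller is touched, nvme/functions.sh bootstraps itself as traced above: it resolves the repository root relative to its own path, sources scripts/common.sh (and with it the PATH exports), and declares the tables that scan_nvme_ctrls populates below. Condensed from the trace; the per-array comments are my gloss, not from the source:

# test/common/nvme/functions.sh resolves rootdir from its own location
rootdir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")/../../../")
source "$rootdir/scripts/common.sh"

declare -A ctrls=()           # filled in by scan_nvme_ctrls
declare -A nvmes=()           # per-namespace data, also from the scan
declare -A bdfs=()            # controller name -> PCI address
declare -a ordered_ctrls=()
nvme_name=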
00:37:50.445 09:06:52 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:37:50.445 09:06:52 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:50.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:50.961 Waiting for block devices as requested 00:37:50.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:51.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:51.218 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:37:51.476 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:37:56.757 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:37:56.757 09:06:58 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:37:56.757 09:06:58 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:37:56.757 09:06:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:37:56.757 09:06:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:37:56.757 09:06:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:37:56.757 09:06:58 -- scripts/common.sh@15 -- # local i 00:37:56.757 09:06:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:37:56.757 09:06:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:56.757 09:06:58 -- scripts/common.sh@24 -- # return 0 00:37:56.757 09:06:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:37:56.757 09:06:58 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:37:56.757 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.757 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read 
-r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:37:56.757 
09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.757 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.757 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.757 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 
]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 
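[annotation] The long run of eval lines above (and continuing below) is nvme_get flattening nvme-cli's id-ctrl output into a global associative array, one register per line: split each line on ':', skip rows with no value, and eval the pair into the array named by the caller. A reconstruction consistent with the trace; the whitespace trimming is approximated and the real helper in test/common/nvme/functions.sh is more involved:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"          # declares the global array, e.g. nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        # id-ctrl prints lines like 'vid : 0x1b36'; store nvme0[vid]=0x1b36
        eval "${ref}[${reg// /}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

# invoked above as
nvme_get nvme0 id-ctrl /dev/nvme0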
00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- 
nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:37:56.758 09:06:58 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.758 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.758 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # 
read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:37:56.759 09:06:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:37:56.759 09:06:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@56 -- # 
ns_dev=nvme0n1 00:37:56.759 09:06:58 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:37:56.759 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.759 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.759 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:37:56.759 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.759 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 
-- # eval 'nvme0n1[dps]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 
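[annotation] The trace above and below is bash xtrace output from nvme/functions.sh: for every "name : value" line that `nvme id-ctrl` or `nvme id-ns` prints, it splits on the colon (the `IFS=:` / `read -r reg val` pairs), skips lines with an empty value (`[[ -n '' ]]`), and evals the pair into a globally declared associative array (`local -gA 'nvme0=()'`, then `nvme0[sqes]=0x66` and so on). A minimal sketch of that pattern, assuming only bash 4+ and the nvme CLI; the helper name nvme_get_sketch is illustrative, not the SPDK function, and it assumes values contain no double quotes (true of id-ctrl/id-ns output):

  nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # declares e.g. nvme0 / nvme0n1 globally
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue             # skip lines with nothing after the ':'
      reg=${reg//[[:space:]]/}              # field name, whitespace stripped
      val=${val#"${val%%[![:space:]]*}"}    # left-trim the value
      eval "${ref}[${reg}]=\"${val}\""      # e.g. nvme0[sqes]=0x66
    done < <("$@")
  }
  # usage: nvme_get_sketch nvme0n1 nvme id-ns /dev/nvme0n1

After one call, every field the trace shows (nsze, flbas, lbaf0..7, ...) is an ordinary key in the named array.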
00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:37:56.760 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.760 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.760 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:37:56.761 09:06:58 
-- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:37:56.761 09:06:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:37:56.761 09:06:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:37:56.761 09:06:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:37:56.761 09:06:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:37:56.761 09:06:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:37:56.761 09:06:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:37:56.761 09:06:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:37:56.761 09:06:58 -- scripts/common.sh@15 -- # local i 00:37:56.761 09:06:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:37:56.761 09:06:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:56.761 09:06:58 -- scripts/common.sh@24 -- # return 0 00:37:56.761 09:06:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:37:56.761 09:06:58 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:37:56.761 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.761 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # 
[[ -n 0x1b36 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:37:56.761 09:06:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.761 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.761 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.761 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 
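[annotation] Several of the id-ctrl fields captured for nvme1 just above are hex capability masks (oacs=0x12a, frmw=0x3, lpa=0x7, and later oncs=0x15d), where each bit position is defined by the NVMe spec. A hedged example of consuming one of them once it is in the array, using the OACS value this QEMU controller reported:

  supports_bit() {                  # supports_bit <hex value> <bit number>
    local val=$1 bit=$2
    (( (val >> bit) & 1 ))
  }
  oacs=0x12a                        # Optional Admin Command Support, from the trace
  if supports_bit "$oacs" 3; then   # bit 3 = Namespace Management per the NVMe spec
    echo "namespace management supported"
  fi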
00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 
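[annotation] Earlier in this trace the nvme0n1 id-ns parse recorded flbas=0x4 and lbaf4='ms:0 lbads:12 rp:0 (in use)'. Those two fields together give the namespace's logical block size: the low nibble of flbas selects the active LBA format, and lbads is the block size as a power of two. A self-contained sketch using exactly the values from the trace:

  flbas=0x4                                  # nvme0n1[flbas] in the trace
  lbaf4='ms:0 lbads:12 rp:0 (in use)'        # nvme0n1[lbaf4] in the trace
  idx=$(( flbas & 0xf ))                     # low nibble = active LBA format index
  lbads=${lbaf4##*lbads:}                    # -> '12 rp:0 (in use)'
  lbads=${lbads%% *}                         # -> '12'
  echo "LBA format $idx, block size $(( 1 << lbads )) bytes"   # 2^12 = 4096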
00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.762 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.762 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:37:56.762 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # 
nvme1[anatt]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:37:56.763 09:06:58 -- 
nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:37:56.763 09:06:58 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.763 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.763 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 
rrl:0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:37:56.764 09:06:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:37:56.764 09:06:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:37:56.764 09:06:58 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:37:56.764 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.764 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[nlbaf]="7"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 
09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.764 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:37:56.764 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.764 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 
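The five-step rhythm repeating through this stretch (a [[ -n ... ]] guard, an eval, the resulting assignment, IFS=:, read -r reg val) is nvme/functions.sh's nvme_get turning the plain-text output of /usr/local/src/nvme-cli/nvme id-ctrl and id-ns into bash associative arrays such as nvme1 and nvme1n1. A minimal sketch of that parsing pattern, assuming only that nvme-cli is installed; the helper name parse_id_output and the nameref (used here in place of the eval seen in the trace) are illustrative, not SPDK's actual implementation:

    #!/usr/bin/env bash
    # Illustrative re-creation of the loop traced above: split each
    # "field : value" line from nvme-cli at the first colon and store
    # it in an associative array named by the caller.
    parse_id_output() {
        local -n _arr=$1          # nameref stands in for the eval in the trace
        local dev=$2 reg val
        _arr=()
        while IFS=: read -r reg val; do
            reg=${reg%%[[:space:]]*}   # drop the padding after the field name
            val=${val# }               # drop the single leading space
            [[ -n $reg && -n $val ]] && _arr[$reg]=$val   # skip header/blank lines
        done < <(nvme id-ctrl "$dev" 2>/dev/null)
    }

    declare -A ctrl
    parse_id_output ctrl /dev/nvme1
    echo "oncs=${ctrl[oncs]:-unset} vwc=${ctrl[vwc]:-unset}"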
00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 
00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:37:56.765 09:06:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:37:56.765 09:06:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:37:56.765 09:06:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:37:56.765 09:06:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:37:56.765 09:06:58 -- 
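The id-ns fields just recorded pin down nvme1n1's block format: flbas=0x7 selects LBA format 7, and lbaf7's lbads:12 means 2^12 = 4096-byte LBAs with ms:64 bytes of metadata, consistent with the "(in use)" marker. A quick decode of those captured values (plain shell arithmetic, nothing assumed beyond the trace):

    # Values copied from the nvme1n1 entries above.
    flbas=0x7 lbads=12 nsze=0x17a17a
    fmt=$(( flbas & 0xf ))          # low nibble of FLBAS picks the LBA format
    lba_bytes=$(( 1 << lbads ))     # lbads is log2 of the LBA size in bytes
    printf 'lbaf%d in use: %d-byte LBAs, nsze %d blocks (~%d MiB)\n' \
        "$fmt" "$lba_bytes" $(( nsze )) $(( nsze * lba_bytes / 1024 / 1024 ))
    # -> lbaf7 in use: 4096-byte LBAs, nsze 1548666 blocks (~6049 MiB)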
nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:37:56.765 09:06:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:37:56.765 09:06:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:37:56.765 09:06:58 -- scripts/common.sh@15 -- # local i 00:37:56.765 09:06:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:37:56.765 09:06:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:56.765 09:06:58 -- scripts/common.sh@24 -- # return 0 00:37:56.765 09:06:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:37:56.765 09:06:58 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:37:56.765 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.765 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:37:56.765 09:06:58 -- nvme/functions.sh@23 -- # 
nvme2[ieee]=525400 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.765 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.765 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
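Each controller in the "for ctrl in /sys/class/nvme/nvme*" loop is first vetted by scripts/common.sh's pci_can_use against its PCI address (0000:00:12.0 for nvme2 above); with both filter lists empty, the [[ =~ ]] and [[ -z '' ]] checks fall through and the function returns 0, so the device is claimed. A rough sketch of that gate; the list variable names and the exact matching are assumptions, simplified from common.sh:

    # Simplified gate in the spirit of pci_can_use (variable names assumed).
    pci_can_use() {
        local bdf=$1
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1  # explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0                 # empty allow list: allow all
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }

    PCI_BLOCKED="" PCI_ALLOWED="" pci_can_use 0000:00:12.0 && echo "claimable"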
00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 
09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.766 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:37:56.766 09:06:58 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.766 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 
00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:37:56.767 09:06:58 
-- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
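Among the nvme2 fields captured in the entries just below, sqes=0x66 and cqes=0x44 pack two sizes into one byte each: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, both expressed as log2 of bytes. Decoding the traced values:

    sqes=0x66 cqes=0x44   # values from the nvme2 id-ctrl entries that follow
    printf 'SQ entries: %d..%d bytes, CQ entries: %d..%d bytes\n' \
        $(( 1 << (sqes & 0xf) )) $(( 1 << ((sqes >> 4) & 0xf) )) \
        $(( 1 << (cqes & 0xf) )) $(( 1 << ((cqes >> 4) & 0xf) ))
    # -> SQ entries: 64..64 bytes, CQ entries: 16..16 bytes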
00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.767 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.767 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.767 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg 
val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # 
nvme2[icdoff]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:37:56.768 09:06:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:37:56.768 09:06:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:37:56.768 09:06:58 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:37:56.768 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.768 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 
00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read 
-r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.768 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.768 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.768 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:37:56.769 09:06:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:37:56.769 09:06:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.769 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:37:56.769 09:06:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:37:56.769 09:06:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:37:56.769 09:06:58 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:37:56.769 09:06:58 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:37:56.769 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:37:56.769 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.769 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:37:56.770 
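At the end of the block above the script records the namespace into _ctrl_ns and the outer loop advances from nvme2n1 to nvme2n2. A simplified sketch of that enumeration loop, assuming the sysfs layout seen in this run (/sys/class/nvme/nvme2/nvme2n*):

#!/usr/bin/env bash
# Simplified from the traced loop: list nvme2's namespaces and index them by number.
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/${ctrl##*/}n"*; do       # expands to .../nvme2n1 .../nvme2n2 .../nvme2n3
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                      # nvme2n1
    _ctrl_ns[${ns_dev##*n}]=$ns_dev       # key "1" -> nvme2n1, matching _ctrl_ns above
done
echo "namespaces: ${_ctrl_ns[*]}"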
09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nacwu]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:37:56.770 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.770 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.770 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:37:56.771 09:06:58 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:37:56.771 09:06:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:37:56.771 09:06:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:37:56.771 09:06:58 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:37:56.771 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@18 -- # shift 
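Each namespace reports nlbaf=7 (eight LBA formats, lbaf0 through lbaf7) and flbas=0x4, whose low nibble selects lbaf4, the format marked "(in use)": ms:0 lbads:12, i.e. 4096-byte logical blocks with no metadata; the lbads:9 formats would give 512-byte blocks. A worked decode in bash arithmetic:

flbas=0x4
fmt=$(( flbas & 0x0f ))                   # low nibble selects the LBA format -> 4
lbads=12                                  # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
echo "format $fmt, block size $(( 1 << lbads )) bytes"   # 4096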
00:37:56.771 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 
09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:37:56.771 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.771 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.771 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # 
IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:37:56.772 09:06:58 
-- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- 
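nguid and eui64 are all zeros for every namespace here, which is what QEMU-emulated namespaces report when no explicit identifiers are configured. To check the same two fields outside this harness, one could filter nvme-cli's plain-text output:

nvme id-ns /dev/nvme2n3 | grep -E '^(nguid|eui64)'
# expected here: nguid : 00000000000000000000000000000000
#                eui64 : 0000000000000000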
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:37:56.772 09:06:58 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.772 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.772 09:06:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:37:56.772 09:06:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:37:56.772 09:06:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:37:56.772 09:06:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:37:56.772 09:06:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:37:56.772 09:06:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:37:56.772 09:06:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:37:56.772 09:06:58 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:37:56.772 09:06:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:37:56.772 09:06:58 -- scripts/common.sh@15 -- # local i 00:37:56.772 09:06:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:37:56.772 09:06:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:56.772 09:06:58 -- scripts/common.sh@24 -- # return 0 00:37:56.772 09:06:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:37:56.772 09:06:58 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:37:56.772 09:06:58 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@18 -- # shift 00:37:56.773 09:06:58 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # 
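Having finished nvme2's three namespaces, the scan records the controller in ctrls, nvmes, and bdfs (PCI address 0000:00:12.0) and moves on to nvme3 at 0000:00:13.0. A sketch of mapping each controller to its PCI BDF from sysfs, assuming PCIe-attached controllers where the "device" symlink resolves to the PCI device directory:

#!/usr/bin/env bash
# Map each NVMe controller to its PCI BDF via the sysfs "device" symlink.
declare -A bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl/device ]] || continue
    name=${ctrl##*/}
    bdfs[$name]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
    echo "$name -> ${bdfs[$name]}"
done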
IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # 
eval 'nvme3[rtd3r]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # 
IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.773 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:37:56.773 09:06:58 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:37:56.773 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
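oacs=0x12a decodes, per the NVMe base specification's Optional Admin Command Support bit assignments, to Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8), a typical profile for a QEMU controller. A quick bit-test sketch:

oacs=0x12a
(( oacs & (1 << 1) )) && echo "Format NVM"
(( oacs & (1 << 3) )) && echo "Namespace Management"
(( oacs & (1 << 5) )) && echo "Directives"
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"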
00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 
00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- 
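oncs=0x15d (Optional NVM Command Support) sets bits 0, 2, 3, 4, 6, and 8: Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp, and Copy; the Copy bit is consistent with the non-zero mssrl/mcl/msrc values parsed for the namespaces above. sqes=0x66 and cqes=0x44 encode queue entry sizes as powers of two in each nibble. Decoded in bash:

oncs=0x15d
(( oncs & (1 << 2) )) && echo "Dataset Management"
(( oncs & (1 << 3) )) && echo "Write Zeroes"
(( oncs & (1 << 8) )) && echo "Copy"      # matches mssrl/mcl/msrc above
sqes=0x66 cqes=0x44
echo "SQE $(( 1 << (sqes & 0xf) )) bytes, CQE $(( 1 << (cqes & 0xf) )) bytes"   # 64, 16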
nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.774 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.774 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:37:56.774 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:37:56.775 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.775 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.775 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:37:56.775 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:37:56.775 09:06:58 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:37:56.775 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:56.775 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:56.775 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:37:57.034 09:06:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.034 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:37:57.034 09:06:58 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.034 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 
00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:37:57.035 09:06:58 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # IFS=: 00:37:57.035 09:06:58 -- nvme/functions.sh@21 -- # read -r reg val 00:37:57.035 09:06:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:37:57.035 09:06:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:37:57.035 09:06:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:37:57.035 09:06:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:37:57.035 09:06:58 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:37:57.035 09:06:58 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:37:57.035 09:06:58 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:37:57.035 09:06:58 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:37:57.035 09:06:58 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:37:57.035 09:06:58 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:37:57.035 09:06:58 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@76 -- # echo 0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # oncs=0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # echo nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:37:57.035 09:06:58 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:37:57.035 09:06:58 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:37:57.035 09:06:58 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:37:57.035 09:06:58 -- 
nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:37:57.035 09:06:58 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@76 -- # echo 0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # oncs=0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # echo nvme0 00:37:57.035 09:06:58 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@76 -- # echo 0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # oncs=0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # echo nvme3 00:37:57.035 09:06:58 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:37:57.035 09:06:58 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:37:57.035 09:06:58 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:37:57.035 09:06:58 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:37:57.035 09:06:58 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:37:57.035 09:06:58 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:37:57.035 09:06:58 -- nvme/functions.sh@76 -- # echo 0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@184 -- # oncs=0x15d 00:37:57.035 09:06:58 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@197 -- # echo nvme2 00:37:57.035 09:06:58 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:37:57.035 09:06:58 -- nvme/functions.sh@206 -- # echo nvme1 00:37:57.035 09:06:58 -- nvme/functions.sh@207 -- # return 0 00:37:57.035 09:06:58 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:37:57.035 09:06:58 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:37:57.035 09:06:58 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:57.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:58.169 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:58.169 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:58.169 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:37:58.169 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:37:58.428 09:07:00 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:37:58.428 09:07:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:37:58.428 09:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:58.428 09:07:00 -- common/autotest_common.sh@10 -- # set +x 00:37:58.428 
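The controller selection that just completed reduces to a single bitmask test: ONCS from Identify Controller advertises the optional Copy command in bit 8, so the value 0x15d (binary 1 0101 1101) read from each controller qualifies all four, and the first match, nvme1 at 0000:00:10.0, is returned; setup.sh then rebinds the devices from the kernel nvme driver to uio_pci_generic so the test can attach via SPDK's userspace PCIe driver. A minimal sketch of the ctrl_has_scc bit test, with a hypothetical helper name:

    # ONCS bit 8 = Copy (simple copy) support; 0x15d & 0x100 is non-zero,
    # so this returns success, matching the "(( oncs & 1 << 8 ))" lines above.
    has_scc() {
        local oncs=$1
        (( oncs & (1 << 8) ))
    }
    has_scc 0x15d && echo "controller supports SCC"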
************************************ 00:37:58.428 START TEST nvme_simple_copy 00:37:58.428 ************************************ 00:37:58.428 09:07:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:37:58.687 Initializing NVMe Controllers 00:37:58.687 Attaching to 0000:00:10.0 00:37:58.687 Controller supports SCC. Attached to 0000:00:10.0 00:37:58.687 Namespace ID: 1 size: 6GB 00:37:58.687 Initialization complete. 00:37:58.687 00:37:58.687 Controller QEMU NVMe Ctrl (12340 ) 00:37:58.687 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:37:58.687 Namespace Block Size:4096 00:37:58.687 Writing LBAs 0 to 63 with Random Data 00:37:58.687 Copied LBAs from 0 - 63 to the Destination LBA 256 00:37:58.687 LBAs matching Written Data: 64 00:37:58.687 00:37:58.687 real 0m0.362s 00:37:58.687 user 0m0.149s 00:37:58.687 sys 0m0.111s 00:37:58.687 09:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:58.687 09:07:00 -- common/autotest_common.sh@10 -- # set +x 00:37:58.687 ************************************ 00:37:58.687 END TEST nvme_simple_copy 00:37:58.687 ************************************ 00:37:58.945 00:37:58.945 real 0m8.562s 00:37:58.945 user 0m1.358s 00:37:58.945 sys 0m2.147s 00:37:58.945 09:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:58.945 09:07:00 -- common/autotest_common.sh@10 -- # set +x 00:37:58.945 ************************************ 00:37:58.945 END TEST nvme_scc 00:37:58.945 ************************************ 00:37:58.945 09:07:00 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:37:58.945 09:07:00 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:37:58.945 09:07:00 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:37:58.945 09:07:00 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:37:58.945 09:07:00 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:37:58.945 09:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:37:58.945 09:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:58.945 09:07:00 -- common/autotest_common.sh@10 -- # set +x 00:37:58.945 ************************************ 00:37:58.945 START TEST nvme_fdp 00:37:58.946 ************************************ 00:37:58.946 09:07:00 -- common/autotest_common.sh@1111 -- # test/nvme/nvme_fdp.sh 00:37:59.205 * Looking for test storage... 
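The nvme_simple_copy pass criterion above is that all 64 source LBAs reappear verbatim at the destination: with the reported 4096-byte block size, one Copy command moved 64 x 4096 = 256 KiB from LBAs 0-63 to SLBA 256, and "LBAs matching Written Data: 64" confirms the readback. A rough way to re-check that result by hand once the namespace is back on the kernel driver (the device path and scratch files are assumptions; the test itself does this in C through the SPDK NVMe driver):

    dev=/dev/nvme1n1   # assumption: kernel-visible namespace after setup.sh reset
    bs=4096            # block size reported by the test
    dd if="$dev" bs=$bs skip=0   count=64 of=/tmp/src.bin status=none
    dd if="$dev" bs=$bs skip=256 count=64 of=/tmp/dst.bin status=none
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"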
00:37:59.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:37:59.205 09:07:01 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:37:59.205 09:07:01 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:37:59.205 09:07:01 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:37:59.205 09:07:01 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:59.205 09:07:01 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:59.205 09:07:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.205 09:07:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.205 09:07:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.205 09:07:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.205 09:07:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.205 09:07:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.205 09:07:01 -- paths/export.sh@5 -- # export PATH 00:37:59.205 09:07:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.205 09:07:01 -- nvme/functions.sh@10 -- # ctrls=() 00:37:59.205 09:07:01 -- nvme/functions.sh@10 -- # declare -A ctrls 00:37:59.205 09:07:01 -- nvme/functions.sh@11 -- # nvmes=() 00:37:59.205 09:07:01 -- nvme/functions.sh@11 -- # declare -A nvmes 00:37:59.205 09:07:01 -- nvme/functions.sh@12 -- # bdfs=() 00:37:59.205 09:07:01 -- nvme/functions.sh@12 -- # declare -A bdfs 00:37:59.205 09:07:01 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:37:59.205 09:07:01 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:37:59.205 09:07:01 -- nvme/functions.sh@14 -- # nvme_name= 00:37:59.205 09:07:01 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:59.205 09:07:01 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:59.464 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:59.722 Waiting for block devices as requested 00:37:59.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:59.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:59.982 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:37:59.982 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:05.259 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:05.259 09:07:07 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:38:05.259 09:07:07 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:38:05.259 09:07:07 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:05.259 09:07:07 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:38:05.259 09:07:07 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:38:05.259 09:07:07 -- scripts/common.sh@15 -- # local i 00:38:05.259 09:07:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:38:05.259 09:07:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:05.259 09:07:07 -- scripts/common.sh@24 -- # return 0 00:38:05.259 09:07:07 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:38:05.259 09:07:07 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:38:05.259 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.259 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- 
nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.259 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:38:05.259 09:07:07 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:38:05.259 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 
00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 
00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:38:05.260 09:07:07 
-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 
09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.260 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.260 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.260 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 
00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # 
nvme0[ioccsz]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:38:05.261 09:07:07 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:38:05.261 09:07:07 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:38:05.261 09:07:07 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:38:05.261 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:38:05.261 09:07:07 -- 
nvme/functions.sh@18 -- # shift 00:38:05.261 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.261 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.261 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:05.261 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # 
read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:38:05.262 09:07:07 
-- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[anagrpid]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.262 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:05.262 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:05.262 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r 
reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:38:05.263 09:07:07 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:38:05.263 09:07:07 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:38:05.263 09:07:07 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:38:05.263 09:07:07 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:38:05.263 09:07:07 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:05.263 09:07:07 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:38:05.263 09:07:07 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:38:05.263 09:07:07 -- scripts/common.sh@15 -- # local i 00:38:05.263 09:07:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:38:05.263 09:07:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:05.263 09:07:07 -- scripts/common.sh@24 -- # return 0 00:38:05.263 09:07:07 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:38:05.263 09:07:07 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:38:05.263 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.263 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:38:05.263 
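# [editor's note] The trace above and below repeats one pattern: nvme_get()
# (nvme/functions.sh@16-23) pipes `nvme id-ctrl` / `nvme id-ns` output through
# `IFS=: read -r reg val` and evals each pair into a global associative array
# (e.g. nvme1[vid]=0x1b36). A minimal standalone sketch of that pattern, under
# the assumption that the script trims whitespace around the colon -- the trace
# only ever shows the already-cleaned keys:
nvme_get() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                     # matches functions.sh@20 in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "vid   " -> "vid" (assumed cleanup)
        val=${val# }                        # drop the space after the colon
        [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <(nvme id-ctrl "$dev")
}
# usage (hypothetical): nvme_get nvme1 /dev/nvme1; echo "${nvme1[sn]}"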
09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.263 
09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.263 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.263 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.263 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:38:05.264 
09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:38:05.264 09:07:07 -- 
nvme/functions.sh@23 -- # nvme1[apsta]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.264 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:38:05.264 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.264 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 
00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:38:05.265 
09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.265 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.265 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:38:05.265 09:07:07 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 
00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:38:05.266 09:07:07 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:38:05.266 09:07:07 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:38:05.266 09:07:07 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:38:05.266 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.266 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 
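# [editor's note] Between the register dumps, the driver loop
# (nvme/functions.sh@47-63) walks /sys/class/nvme/nvme*, resolves each
# controller's PCI address, and records the controller/namespace/bdf mappings
# seen in the trace (ctrls, nvmes, bdfs, ordered_ctrls). A hedged sketch of
# that walk -- deriving the BDF via the sysfs 'device' symlink is an
# assumption, since the log only shows the result (e.g.
# bdfs[nvme1]=0000:00:10.0); the script itself also indirects namespaces
# through per-controller arrays like nvme1_ns, where this sketch just stores
# the namespace names directly:
declare -A ctrls nvmes bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
    for ns in "$ctrl/${ctrl##*/}n"*; do               # same glob as functions.sh@54
        [[ -e $ns ]] && nvmes[$ctrl_dev]+="${ns##*/} "
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
done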
-- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # 
nvme1n1[nacwu]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.266 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.266 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:38:05.266 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[mssrl]="128"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:05.267 
09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:38:05.267 09:07:07 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:38:05.267 09:07:07 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:38:05.267 09:07:07 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:38:05.267 09:07:07 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:38:05.267 09:07:07 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:05.267 09:07:07 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@49 -- # 
pci=0000:00:12.0 00:38:05.267 09:07:07 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:38:05.267 09:07:07 -- scripts/common.sh@15 -- # local i 00:38:05.267 09:07:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:38:05.267 09:07:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:05.267 09:07:07 -- scripts/common.sh@24 -- # return 0 00:38:05.267 09:07:07 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:38:05.267 09:07:07 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:38:05.267 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.267 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.267 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.267 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:38:05.267 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.268 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:38:05.268 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:38:05.268 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # 
nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.531 09:07:07 -- 
nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.531 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.531 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.531 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 
-- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 
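
[editor's note] The hundreds of functions.sh@21-@23 entries in this trace are all one loop: nvme_get pipes nvme-cli's "field : value" output through IFS=: / read -r reg val and eval-assigns each field into a per-device associative array (nvme2[mdts]=7, nvme2[sn]='12342 ', and so on). A minimal sketch of that pattern, assuming nvme-cli's usual id-ctrl/id-ns text output; parse_id_output is an illustrative name, not the verbatim SPDK helper:

    # Illustrative reconstruction of the parsing loop this trace is executing;
    # helper name and details are assumptions, not the SPDK source verbatim.
    parse_id_output() { # usage: parse_id_output <array-name> <id-ctrl|id-ns> <dev>
        local ref=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg// /} val=${val# }          # "mdts      : 7" -> reg=mdts val=7
            [[ -n $reg && -n $val ]] || continue # skip banner/blank lines
            # eval writes into the caller's array, e.g. nvme2[mdts]="7"
            eval "${ref}[$reg]=\"\$val\""
        done < <(nvme "$2" "$3")
    }

    declare -A nvme2=()
    parse_id_output nvme2 id-ctrl /dev/nvme2     # then: echo "${nvme2[mdts]}"

Splitting on the first colon only (read -r reg val) is what lets values that themselves contain colons, such as subnqn=nqn.2019-08.org.qemu:12342 below, survive intact.
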
00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:38:05.532 09:07:07 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.532 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.532 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 
00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 
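
[editor's note] The functions.sh@47-@63 and @53-@57 entries surrounding this block come from the outer enumeration: walk /sys/class/nvme, skip controllers that pci_can_use rejects, parse id-ctrl, then visit each namespace node. A hedged sketch of that walk; pci_can_use and nvme_get are the names the trace shows, but their bodies and the readlink step here are assumptions:

    # Sketch of the device walk visible in this trace (not verbatim SPDK code);
    # pci_can_use/nvme_get are assumed to be the helpers traced above.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                   # honors block/allow lists
        ctrl_dev=${ctrl##*/}                             # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
        done
    done
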
00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:38:05.533 09:07:07 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:38:05.533 09:07:07 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:38:05.533 09:07:07 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:38:05.533 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.533 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 
09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[fpi]="0"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.533 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.533 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:38:05.533 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:38:05.534 09:07:07 -- 
nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:05.534 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.534 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.534 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # 
read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:38:05.535 09:07:07 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:38:05.535 09:07:07 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:38:05.535 09:07:07 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:38:05.535 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.535 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 
09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:38:05.535 09:07:07 -- 
nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:38:05.535 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.535 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.535 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 
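
[editor's note] The namespace fields captured above are enough to compute geometry: the low nibble of flbas selects the active LBA format, and that format's lbads is a power-of-two block-size exponent. A sketch using exactly the values this trace recorded for nvme2n2 (nsze=0x100000, flbas=0x4, lbaf4 with lbads:12); the decoding lines themselves are illustrative:

    # Array contents mirror the trace's assignments; the helper is a sketch.
    declare -A nvme2n2=(
        [nsze]=0x100000
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    lbaf_index=$(( ${nvme2n2[flbas]} & 0xf ))     # low nibble picks the LBA format
    lbaf=${nvme2n2[lbaf${lbaf_index}]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}    # extract the lbads field
    block_size=$(( 1 << lbads ))                  # 2^12 = 4096-byte blocks
    size_bytes=$(( ${nvme2n2[nsze]} * block_size ))

    echo "nvme2n2: $(( ${nvme2n2[nsze]} )) blocks of ${block_size}B = ${size_bytes} bytes"
    # -> nvme2n2: 1048576 blocks of 4096B = 4294967296 bytes (4 GiB)
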
00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 
-- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.536 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:05.536 09:07:07 -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:05.536 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:38:05.537 09:07:07 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:38:05.537 09:07:07 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:38:05.537 09:07:07 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:38:05.537 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.537 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:38:05.537 09:07:07 -- nvme/functions.sh@21 
-- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 
'nvme2n3[nmic]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.537 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.537 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.537 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # 
IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:38:05.538 09:07:07 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:38:05.538 09:07:07 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:38:05.538 09:07:07 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:38:05.538 09:07:07 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:38:05.538 09:07:07 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:05.538 09:07:07 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:38:05.538 09:07:07 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:38:05.538 09:07:07 -- scripts/common.sh@15 -- # local i 00:38:05.538 09:07:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:38:05.538 09:07:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:05.538 09:07:07 -- scripts/common.sh@24 -- # return 0 00:38:05.538 09:07:07 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:38:05.538 09:07:07 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:38:05.538 09:07:07 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@18 -- # shift 00:38:05.538 09:07:07 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- 
nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.538 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:38:05.538 09:07:07 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.538 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 
09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.539 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:38:05.539 09:07:07 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.539 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:38:05.540 09:07:07 
-- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:38:05.540 09:07:07 
-- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.540 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:38:05.540 09:07:07 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.540 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 
09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.541 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.541 09:07:07 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:05.541 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.542 09:07:07 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.542 09:07:07 -- nvme/functions.sh@22 -- 
# [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.542 09:07:07 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:38:05.542 09:07:07 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # IFS=: 00:38:05.542 09:07:07 -- nvme/functions.sh@21 -- # read -r reg val 00:38:05.542 09:07:07 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:38:05.542 09:07:07 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:38:05.542 09:07:07 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:38:05.542 09:07:07 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:38:05.542 09:07:07 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:38:05.542 09:07:07 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:38:05.542 09:07:07 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:38:05.542 09:07:07 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:38:05.542 09:07:07 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:38:05.542 09:07:07 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:38:05.542 09:07:07 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:38:05.542 09:07:07 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:38:05.542 09:07:07 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:38:05.542 09:07:07 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:38:05.542 09:07:07 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:38:05.542 09:07:07 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:38:05.542 09:07:07 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@76 -- # echo 0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:38:05.542 09:07:07 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:38:05.542 09:07:07 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:38:05.542 09:07:07 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:38:05.542 09:07:07 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:38:05.542 09:07:07 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:38:05.542 09:07:07 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@76 -- # 
echo 0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:38:05.542 09:07:07 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:38:05.542 09:07:07 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@76 -- # echo 0x88010 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:38:05.542 09:07:07 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:38:05.542 09:07:07 -- nvme/functions.sh@197 -- # echo nvme3 00:38:05.542 09:07:07 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:38:05.542 09:07:07 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:38:05.542 09:07:07 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:38:05.542 09:07:07 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:38:05.542 09:07:07 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:38:05.542 09:07:07 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:38:05.542 09:07:07 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:38:05.542 09:07:07 -- nvme/functions.sh@76 -- # echo 0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:38:05.542 09:07:07 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:38:05.542 09:07:07 -- nvme/functions.sh@204 -- # trap - ERR 00:38:05.542 09:07:07 -- nvme/functions.sh@204 -- # print_backtrace 00:38:05.542 09:07:07 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:38:05.542 09:07:07 -- common/autotest_common.sh@1139 -- # return 0 00:38:05.542 09:07:07 -- nvme/functions.sh@204 -- # trap - ERR 00:38:05.543 09:07:07 -- nvme/functions.sh@204 -- # print_backtrace 00:38:05.543 09:07:07 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:38:05.543 09:07:07 -- common/autotest_common.sh@1139 -- # return 0 00:38:05.543 09:07:07 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:38:05.543 09:07:07 -- nvme/functions.sh@206 -- # echo nvme3 00:38:05.543 09:07:07 -- nvme/functions.sh@207 -- # return 0 00:38:05.543 09:07:07 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:38:05.543 09:07:07 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:13.0 00:38:05.543 09:07:07 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:06.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:06.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:06.745 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:38:06.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:07.008 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:38:07.008 09:07:09 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:38:07.008 09:07:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:38:07.008 09:07:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:38:07.008 09:07:09 -- common/autotest_common.sh@10 -- # set +x
00:38:07.008 ************************************
00:38:07.008 START TEST nvme_flexible_data_placement
00:38:07.008 ************************************
00:38:07.008 09:07:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:38:07.576 Initializing NVMe Controllers
00:38:07.576 Attaching to 0000:00:13.0
00:38:07.576 Controller supports FDP Attached to 0000:00:13.0
00:38:07.576 Namespace ID: 1 Endurance Group ID: 1
00:38:07.576 Initialization complete.
00:38:07.576 
00:38:07.576 ==================================
00:38:07.576 == FDP tests for Namespace: #01 ==
00:38:07.576 ==================================
00:38:07.576 
00:38:07.576 Get Feature: FDP:
00:38:07.576 =================
00:38:07.576 Enabled: Yes
00:38:07.576 FDP configuration Index: 0
00:38:07.576 
00:38:07.576 FDP configurations log page
00:38:07.576 ===========================
00:38:07.576 Number of FDP configurations: 1
00:38:07.576 Version: 0
00:38:07.576 Size: 112
00:38:07.576 FDP Configuration Descriptor: 0
00:38:07.576 Descriptor Size: 96
00:38:07.576 Reclaim Group Identifier format: 2
00:38:07.576 FDP Volatile Write Cache: Not Present
00:38:07.576 FDP Configuration: Valid
00:38:07.576 Vendor Specific Size: 0
00:38:07.576 Number of Reclaim Groups: 2
00:38:07.576 Number of Reclaim Unit Handles: 8
00:38:07.576 Max Placement Identifiers: 128
00:38:07.576 Number of Namespaces Supported: 256
00:38:07.576 Reclaim Unit Nominal Size: 6000000 bytes
00:38:07.576 Estimated Reclaim Unit Time Limit: Not Reported
00:38:07.576 RUH Desc #000: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #001: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #002: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #003: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #004: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #005: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #006: RUH Type: Initially Isolated
00:38:07.576 RUH Desc #007: RUH Type: Initially Isolated
00:38:07.576 
00:38:07.576 FDP reclaim unit handle usage log page
00:38:07.576 ======================================
00:38:07.576 Number of Reclaim Unit Handles: 8
00:38:07.576 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:38:07.576 RUH Usage Desc #001: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #002: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #003: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #004: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #005: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #006: RUH Attributes: Unused
00:38:07.576 RUH Usage Desc #007: RUH Attributes: Unused
00:38:07.576 
00:38:07.576 FDP statistics log page
00:38:07.576 =======================
00:38:07.576 Host bytes with metadata written: 701849600
00:38:07.576 Media bytes with metadata written: 701992960
00:38:07.576 Media bytes erased: 0
00:38:07.576 
00:38:07.576 FDP Reclaim unit handle status
00:38:07.576 ==============================
00:38:07.576 Number of RUHS descriptors: 2
00:38:07.576 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000002aa
00:38:07.576 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:38:07.576 FDP write on placement id: 0 success
00:38:07.576 
00:38:07.576 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:38:07.576 
00:38:07.576 IO mgmt send: RUH update for Placement ID: #0 Success
00:38:07.576 
00:38:07.576 Get Feature: FDP Events for Placement handle: #0
00:38:07.576 ========================
00:38:07.576 Number of FDP Events: 6
00:38:07.576 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:38:07.576 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:38:07.576 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:38:07.576 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:38:07.576 FDP Event: #4 Type: Media Reallocated Enabled: No
00:38:07.576 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:38:07.576 
00:38:07.576 FDP events log page
00:38:07.576 ===================
00:38:07.576 Number of FDP events: 1
00:38:07.576 FDP Event #0:
00:38:07.576 Event Type: RU Not Written to Capacity
00:38:07.576 Placement Identifier: Valid
00:38:07.576 NSID: Valid
00:38:07.576 Location: Valid
00:38:07.576 Placement Identifier: 0
00:38:07.576 Event Timestamp: 10
00:38:07.576 Namespace Identifier: 1
00:38:07.576 Reclaim Group Identifier: 0
00:38:07.576 Reclaim Unit Handle Identifier: 0
00:38:07.576 
00:38:07.576 FDP test passed
00:38:07.576 
00:38:07.576 real 0m0.377s
00:38:07.576 user 0m0.138s
00:38:07.576 sys 0m0.136s
00:38:07.576 09:07:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:38:07.576 09:07:09 -- common/autotest_common.sh@10 -- # set +x
00:38:07.576 ************************************
00:38:07.576 END TEST nvme_flexible_data_placement
00:38:07.576 ************************************
00:38:07.576 
00:38:07.576 real 0m8.553s
00:38:07.576 user 0m1.318s
00:38:07.576 sys 0m2.224s
00:38:07.576 09:07:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:38:07.576 ************************************
00:38:07.576 END TEST nvme_fdp
00:38:07.576 09:07:09 -- common/autotest_common.sh@10 -- # set +x
00:38:07.576 ************************************
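The nvme_fdp suite above leaned on two helpers from nvme/functions.sh that the xtrace shows over and over: nvme_get (functions.sh@17-23), which reads each "reg : value" line of `nvme id-ctrl` into a Bash associative array, and ctrl_has_fdp (functions.sh@174-178), which tests CTRATT bit 19, the Flexible Data Placement capability bit; that is why nvme3 (ctratt=0x88010) was selected while nvme0, nvme1 and nvme2 (ctratt=0x8000) were skipped. A minimal standalone sketch of the same idea, assuming nvme-cli is installed and /dev/nvme3 exists; it deliberately skips the quoting and shift handling the real helper does:

  declare -A ctrl
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # field names are space-padded for alignment
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=${val# }           # drop the single space after the colon
  done < <(nvme id-ctrl /dev/nvme3)
  # Bit 19 of CTRATT advertises FDP support; 0x88010 has it, 0x8000 does not.
  if (( ctrl[ctratt] & 1 << 19 )); then
    echo "FDP supported (ctratt=${ctrl[ctratt]})"
  fi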
00:38:07.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:38:07.836 09:07:09 -- common/autotest_common.sh@1510 -- # bdfs=() 00:38:07.836 09:07:09 -- common/autotest_common.sh@1510 -- # local bdfs 00:38:07.836 09:07:09 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:38:07.836 09:07:09 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:38:07.836 09:07:09 -- common/autotest_common.sh@1499 -- # bdfs=() 00:38:07.836 09:07:09 -- common/autotest_common.sh@1499 -- # local bdfs 00:38:07.836 09:07:09 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:07.836 09:07:09 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:07.836 09:07:09 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:38:07.836 09:07:09 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:38:07.836 09:07:09 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:38:07.836 09:07:09 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72903 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:38:07.836 09:07:09 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72903 00:38:07.836 09:07:09 -- common/autotest_common.sh@817 -- # '[' -z 72903 ']' 00:38:07.836 09:07:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.836 09:07:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:07.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.836 09:07:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.836 09:07:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:07.836 09:07:09 -- common/autotest_common.sh@10 -- # set +x 00:38:08.098 [2024-04-18 09:07:09.992849] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:38:08.098 [2024-04-18 09:07:09.993073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72903 ] 00:38:08.098 [2024-04-18 09:07:10.189523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:08.681 [2024-04-18 09:07:10.549534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.681 [2024-04-18 09:07:10.549558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.086 09:07:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:10.086 09:07:11 -- common/autotest_common.sh@850 -- # return 0 00:38:10.086 09:07:11 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:38:10.344 Nvme0n1 00:38:10.344 09:07:12 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:38:10.344 09:07:12 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:38:10.602 request: 00:38:10.602 { 00:38:10.602 "filename": "non_existing_file", 00:38:10.602 "bdev_name": "Nvme0n1", 00:38:10.602 "method": "bdev_nvme_apply_firmware", 00:38:10.602 "req_id": 1 00:38:10.602 } 00:38:10.602 Got JSON-RPC error response 00:38:10.602 response: 00:38:10.602 { 00:38:10.602 "code": -32603, 00:38:10.602 "message": "open file failed." 00:38:10.602 } 00:38:10.602 09:07:12 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:38:10.602 09:07:12 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:38:10.602 09:07:12 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:38:10.860 09:07:12 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:38:10.860 09:07:12 -- nvme/nvme_rpc.sh@40 -- # killprocess 72903 00:38:10.860 09:07:12 -- common/autotest_common.sh@936 -- # '[' -z 72903 ']' 00:38:10.860 09:07:12 -- common/autotest_common.sh@940 -- # kill -0 72903 00:38:10.860 09:07:12 -- common/autotest_common.sh@941 -- # uname 00:38:10.860 09:07:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:10.860 09:07:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72903 00:38:10.860 09:07:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:10.860 killing process with pid 72903 00:38:10.860 09:07:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:10.860 09:07:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72903' 00:38:10.860 09:07:12 -- common/autotest_common.sh@955 -- # kill 72903 00:38:10.860 09:07:12 -- common/autotest_common.sh@960 -- # wait 72903 00:38:14.140 00:38:14.140 real 0m5.871s 00:38:14.140 user 0m10.951s 00:38:14.140 sys 0m0.801s 00:38:14.140 09:07:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:14.140 09:07:15 -- common/autotest_common.sh@10 -- # set +x 00:38:14.140 ************************************ 00:38:14.140 END TEST nvme_rpc 00:38:14.140 ************************************ 00:38:14.140 09:07:15 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:38:14.140 09:07:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:14.140 09:07:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:14.140 09:07:15 -- common/autotest_common.sh@10 -- # set +x 00:38:14.140 ************************************ 00:38:14.140 START TEST 
nvme_rpc_timeouts 00:38:14.140 ************************************ 00:38:14.140 09:07:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:38:14.140 * Looking for test storage... 00:38:14.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73005 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73005 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73030 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:38:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.140 09:07:15 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73030 00:38:14.140 09:07:15 -- common/autotest_common.sh@817 -- # '[' -z 73030 ']' 00:38:14.140 09:07:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.140 09:07:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:14.140 09:07:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.140 09:07:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:14.140 09:07:15 -- common/autotest_common.sh@10 -- # set +x 00:38:14.140 [2024-04-18 09:07:15.817798] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:38:14.140 [2024-04-18 09:07:15.817969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73030 ] 00:38:14.140 [2024-04-18 09:07:16.018186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:14.398 [2024-04-18 09:07:16.330267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.398 [2024-04-18 09:07:16.330284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.328 09:07:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:15.328 09:07:17 -- common/autotest_common.sh@850 -- # return 0 00:38:15.329 09:07:17 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:38:15.329 Checking default timeout settings: 00:38:15.329 09:07:17 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:15.892 Making settings changes with rpc: 00:38:15.892 09:07:17 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:38:15.892 09:07:17 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:38:16.150 Check default vs. modified settings: 00:38:16.150 09:07:18 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:38:16.150 09:07:18 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73005 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:16.406 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:38:16.663 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73005 00:38:16.663 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:16.663 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:38:16.664 Setting action_on_timeout is changed as expected. 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73005 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73005 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:38:16.664 Setting timeout_us is changed as expected. 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73005 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73005 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:38:16.664 Setting timeout_admin_us is changed as expected. 
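The three comparisons above all follow one pattern: pull the setting's value out of the saved config with grep and awk, strip punctuation with sed, and fail if the default and modified values still match. Reduced to a standalone sketch, using the tmp-file names this run created:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_73005 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        modified=$(grep "$setting" /tmp/settings_modified_73005 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$modified" ]; then
            echo "Setting $setting was not changed from its default"
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done

Per the output above, the expected transitions are action_on_timeout none -> abort, timeout_us 0 -> 12000000, and timeout_admin_us 0 -> 24000000, matching the bdev_nvme_set_options call issued earlier.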
00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73005 /tmp/settings_modified_73005 00:38:16.664 09:07:18 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73030 00:38:16.664 09:07:18 -- common/autotest_common.sh@936 -- # '[' -z 73030 ']' 00:38:16.664 09:07:18 -- common/autotest_common.sh@940 -- # kill -0 73030 00:38:16.664 09:07:18 -- common/autotest_common.sh@941 -- # uname 00:38:16.664 09:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:16.664 09:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73030 00:38:16.664 09:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:16.664 killing process with pid 73030 00:38:16.664 09:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:16.664 09:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73030' 00:38:16.664 09:07:18 -- common/autotest_common.sh@955 -- # kill 73030 00:38:16.664 09:07:18 -- common/autotest_common.sh@960 -- # wait 73030 00:38:19.943 RPC TIMEOUT SETTING TEST PASSED. 00:38:19.943 09:07:21 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:38:19.943 00:38:19.943 real 0m5.670s 00:38:19.943 user 0m10.881s 00:38:19.943 sys 0m0.694s 00:38:19.943 09:07:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:19.943 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:19.943 ************************************ 00:38:19.943 END TEST nvme_rpc_timeouts 00:38:19.943 ************************************ 00:38:19.943 09:07:21 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:38:19.943 09:07:21 -- spdk/autotest.sh@245 -- # [[ 1 -eq 1 ]] 00:38:19.943 09:07:21 -- spdk/autotest.sh@246 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:38:19.943 09:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:19.943 09:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:19.943 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:19.943 ************************************ 00:38:19.943 START TEST nvme_xnvme 00:38:19.943 ************************************ 00:38:19.943 09:07:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:38:19.943 * Looking for test storage... 
00:38:19.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:38:19.943 09:07:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:19.943 09:07:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.943 09:07:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.943 09:07:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.943 09:07:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.943 09:07:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.943 09:07:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.943 09:07:21 -- paths/export.sh@5 -- # export PATH 00:38:19.943 09:07:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:38:19.943 09:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:38:19.943 09:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:38:19.943 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:19.943 ************************************ 00:38:19.943 START TEST xnvme_to_malloc_dd_copy 00:38:19.943 ************************************ 00:38:19.943 09:07:21 -- common/autotest_common.sh@1111 -- # malloc_to_xnvme_copy 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:38:19.943 09:07:21 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:38:19.943 09:07:21 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:38:19.943 09:07:21 -- dd/common.sh@191 -- # return 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@18 -- # local io 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:38:19.943 
09:07:21 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:38:19.943 09:07:21 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:38:19.944 09:07:21 -- xnvme/xnvme.sh@42 -- # gen_conf 00:38:19.944 09:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:38:19.944 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:19.944 { 00:38:19.944 "subsystems": [ 00:38:19.944 { 00:38:19.944 "subsystem": "bdev", 00:38:19.944 "config": [ 00:38:19.944 { 00:38:19.944 "params": { 00:38:19.944 "block_size": 512, 00:38:19.944 "num_blocks": 2097152, 00:38:19.944 "name": "malloc0" 00:38:19.944 }, 00:38:19.944 "method": "bdev_malloc_create" 00:38:19.944 }, 00:38:19.944 { 00:38:19.944 "params": { 00:38:19.944 "io_mechanism": "libaio", 00:38:19.944 "filename": "/dev/nullb0", 00:38:19.944 "name": "null0" 00:38:19.944 }, 00:38:19.944 "method": "bdev_xnvme_create" 00:38:19.944 }, 00:38:19.944 { 00:38:19.944 "method": "bdev_wait_for_examine" 00:38:19.944 } 00:38:19.944 ] 00:38:19.944 } 00:38:19.944 ] 00:38:19.944 } 00:38:19.944 [2024-04-18 09:07:21.804501] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:38:19.944 [2024-04-18 09:07:21.804659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 00:38:19.944 [2024-04-18 09:07:21.978025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.203 [2024-04-18 09:07:22.294587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.517  Copying: 187/1024 [MB] (187 MBps) Copying: 426/1024 [MB] (239 MBps) Copying: 657/1024 [MB] (230 MBps) Copying: 860/1024 [MB] (203 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:38:32.517 00:38:32.517 09:07:33 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:38:32.517 09:07:33 -- xnvme/xnvme.sh@47 -- # gen_conf 00:38:32.517 09:07:33 -- dd/common.sh@31 -- # xtrace_disable 00:38:32.517 09:07:33 -- common/autotest_common.sh@10 -- # set +x 00:38:32.517 { 00:38:32.517 "subsystems": [ 00:38:32.517 { 00:38:32.517 "subsystem": "bdev", 00:38:32.517 "config": [ 00:38:32.517 { 00:38:32.517 "params": { 00:38:32.517 "block_size": 512, 00:38:32.517 "num_blocks": 2097152, 00:38:32.517 "name": "malloc0" 00:38:32.517 }, 00:38:32.517 "method": "bdev_malloc_create" 00:38:32.517 }, 00:38:32.517 { 00:38:32.517 "params": { 00:38:32.517 "io_mechanism": "libaio", 00:38:32.517 "filename": "/dev/nullb0", 00:38:32.517 "name": "null0" 00:38:32.517 }, 00:38:32.517 "method": "bdev_xnvme_create" 00:38:32.517 }, 00:38:32.518 { 00:38:32.518 "method": "bdev_wait_for_examine" 00:38:32.518 } 00:38:32.518 ] 00:38:32.518 } 00:38:32.518 ] 00:38:32.518 } 00:38:32.518 [2024-04-18 09:07:33.815165] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:38:32.518 [2024-04-18 09:07:33.815294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73328 ] 00:38:32.518 [2024-04-18 09:07:33.990286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.518 [2024-04-18 09:07:34.284828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.755  Copying: 229/1024 [MB] (229 MBps) Copying: 458/1024 [MB] (229 MBps) Copying: 693/1024 [MB] (234 MBps) Copying: 932/1024 [MB] (239 MBps) Copying: 1024/1024 [MB] (average 232 MBps) 00:38:43.755 00:38:43.755 09:07:45 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:38:43.755 09:07:45 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:38:43.755 09:07:45 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:38:43.755 09:07:45 -- xnvme/xnvme.sh@42 -- # gen_conf 00:38:43.755 09:07:45 -- dd/common.sh@31 -- # xtrace_disable 00:38:43.755 09:07:45 -- common/autotest_common.sh@10 -- # set +x 00:38:43.755 { 00:38:43.755 "subsystems": [ 00:38:43.755 { 00:38:43.755 "subsystem": "bdev", 00:38:43.755 "config": [ 00:38:43.755 { 00:38:43.755 "params": { 00:38:43.755 "block_size": 512, 00:38:43.755 "num_blocks": 2097152, 00:38:43.755 "name": "malloc0" 00:38:43.755 }, 00:38:43.755 "method": "bdev_malloc_create" 00:38:43.755 }, 00:38:43.755 { 00:38:43.755 "params": { 00:38:43.755 "io_mechanism": "io_uring", 00:38:43.755 "filename": "/dev/nullb0", 00:38:43.755 "name": "null0" 00:38:43.755 }, 00:38:43.755 "method": "bdev_xnvme_create" 00:38:43.755 }, 00:38:43.755 { 00:38:43.755 "method": "bdev_wait_for_examine" 00:38:43.755 } 00:38:43.755 ] 00:38:43.755 } 00:38:43.755 ] 00:38:43.755 } 00:38:43.755 [2024-04-18 09:07:45.571554] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:38:43.755 [2024-04-18 09:07:45.571742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73455 ] 00:38:43.755 [2024-04-18 09:07:45.742148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.013 [2024-04-18 09:07:46.087981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.605  Copying: 192/1024 [MB] (192 MBps) Copying: 379/1024 [MB] (187 MBps) Copying: 574/1024 [MB] (195 MBps) Copying: 793/1024 [MB] (218 MBps) Copying: 1008/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 201 MBps) 00:38:56.605 00:38:56.605 09:07:58 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:38:56.605 09:07:58 -- xnvme/xnvme.sh@47 -- # gen_conf 00:38:56.605 09:07:58 -- dd/common.sh@31 -- # xtrace_disable 00:38:56.605 09:07:58 -- common/autotest_common.sh@10 -- # set +x 00:38:56.605 { 00:38:56.605 "subsystems": [ 00:38:56.605 { 00:38:56.605 "subsystem": "bdev", 00:38:56.605 "config": [ 00:38:56.605 { 00:38:56.605 "params": { 00:38:56.605 "block_size": 512, 00:38:56.605 "num_blocks": 2097152, 00:38:56.605 "name": "malloc0" 00:38:56.605 }, 00:38:56.605 "method": "bdev_malloc_create" 00:38:56.605 }, 00:38:56.605 { 00:38:56.605 "params": { 00:38:56.605 "io_mechanism": "io_uring", 00:38:56.605 "filename": "/dev/nullb0", 00:38:56.605 "name": "null0" 00:38:56.605 }, 00:38:56.605 "method": "bdev_xnvme_create" 00:38:56.605 }, 00:38:56.605 { 00:38:56.605 "method": "bdev_wait_for_examine" 00:38:56.605 } 00:38:56.605 ] 00:38:56.605 } 00:38:56.605 ] 00:38:56.605 } 00:38:56.605 [2024-04-18 09:07:58.177505] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:38:56.605 [2024-04-18 09:07:58.177695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73592 ] 00:38:56.605 [2024-04-18 09:07:58.361920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.605 [2024-04-18 09:07:58.696045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.471  Copying: 242/1024 [MB] (242 MBps) Copying: 468/1024 [MB] (226 MBps) Copying: 719/1024 [MB] (251 MBps) Copying: 962/1024 [MB] (242 MBps) Copying: 1024/1024 [MB] (average 235 MBps) 00:39:08.471 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:39:08.471 09:08:10 -- dd/common.sh@195 -- # modprobe -r null_blk 00:39:08.471 00:39:08.471 real 0m48.514s 00:39:08.471 user 0m42.733s 00:39:08.471 sys 0m5.075s 00:39:08.471 09:08:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:08.471 09:08:10 -- common/autotest_common.sh@10 -- # set +x 00:39:08.471 ************************************ 00:39:08.471 END TEST xnvme_to_malloc_dd_copy 00:39:08.471 ************************************ 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:39:08.471 09:08:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:39:08.471 09:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:08.471 09:08:10 -- common/autotest_common.sh@10 -- # set +x 00:39:08.471 ************************************ 00:39:08.471 START TEST xnvme_bdevperf 00:39:08.471 ************************************ 00:39:08.471 09:08:10 -- common/autotest_common.sh@1111 -- # xnvme_bdevperf 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:39:08.471 09:08:10 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:39:08.471 09:08:10 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:39:08.471 09:08:10 -- dd/common.sh@191 -- # return 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@60 -- # local io 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:39:08.471 09:08:10 -- xnvme/xnvme.sh@74 -- # gen_conf 00:39:08.471 09:08:10 -- dd/common.sh@31 -- # xtrace_disable 00:39:08.471 09:08:10 -- common/autotest_common.sh@10 -- # set +x 00:39:08.471 { 00:39:08.471 "subsystems": [ 00:39:08.471 { 00:39:08.471 "subsystem": "bdev", 00:39:08.471 "config": [ 00:39:08.471 { 00:39:08.471 "params": { 00:39:08.471 "io_mechanism": "libaio", 00:39:08.471 "filename": "/dev/nullb0", 00:39:08.471 "name": "null0" 
00:39:08.471 }, 00:39:08.471 "method": "bdev_xnvme_create" 00:39:08.471 }, 00:39:08.471 { 00:39:08.471 "method": "bdev_wait_for_examine" 00:39:08.471 } 00:39:08.471 ] 00:39:08.471 } 00:39:08.471 ] 00:39:08.471 } 00:39:08.471 [2024-04-18 09:08:10.354147] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:39:08.471 [2024-04-18 09:08:10.354746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73756 ] 00:39:08.471 [2024-04-18 09:08:10.526630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.729 [2024-04-18 09:08:10.809208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.364 Running I/O for 5 seconds... 00:39:14.634 00:39:14.634 Latency(us) 00:39:14.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.634 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:39:14.634 null0 : 5.00 120711.11 471.53 0.00 0.00 527.07 133.61 1778.83 00:39:14.634 =================================================================================================================== 00:39:14.634 Total : 120711.11 471.53 0.00 0.00 527.07 133.61 1778.83 00:39:16.009 09:08:17 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:39:16.009 09:08:17 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:39:16.009 09:08:17 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:39:16.009 09:08:17 -- xnvme/xnvme.sh@74 -- # gen_conf 00:39:16.009 09:08:17 -- dd/common.sh@31 -- # xtrace_disable 00:39:16.009 09:08:17 -- common/autotest_common.sh@10 -- # set +x 00:39:16.009 { 00:39:16.009 "subsystems": [ 00:39:16.009 { 00:39:16.009 "subsystem": "bdev", 00:39:16.009 "config": [ 00:39:16.009 { 00:39:16.009 "params": { 00:39:16.009 "io_mechanism": "io_uring", 00:39:16.009 "filename": "/dev/nullb0", 00:39:16.009 "name": "null0" 00:39:16.009 }, 00:39:16.009 "method": "bdev_xnvme_create" 00:39:16.009 }, 00:39:16.009 { 00:39:16.009 "method": "bdev_wait_for_examine" 00:39:16.009 } 00:39:16.009 ] 00:39:16.009 } 00:39:16.009 ] 00:39:16.009 } 00:39:16.009 [2024-04-18 09:08:17.799124] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:39:16.009 [2024-04-18 09:08:17.800241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:39:16.009 [2024-04-18 09:08:17.974657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.268 [2024-04-18 09:08:18.260611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.834 Running I/O for 5 seconds... 
00:39:22.171 00:39:22.171 Latency(us) 00:39:22.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.171 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:39:22.171 null0 : 5.00 184376.37 720.22 0.00 0.00 344.55 201.87 1521.37 00:39:22.171 =================================================================================================================== 00:39:22.171 Total : 184376.37 720.22 0.00 0.00 344.55 201.87 1521.37 00:39:23.544 09:08:25 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:39:23.544 09:08:25 -- dd/common.sh@195 -- # modprobe -r null_blk 00:39:23.544 00:39:23.544 real 0m15.042s 00:39:23.544 user 0m11.525s 00:39:23.544 sys 0m3.264s 00:39:23.544 09:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:23.544 09:08:25 -- common/autotest_common.sh@10 -- # set +x 00:39:23.544 ************************************ 00:39:23.544 END TEST xnvme_bdevperf 00:39:23.545 ************************************ 00:39:23.545 00:39:23.545 real 1m3.895s 00:39:23.545 user 0m54.377s 00:39:23.545 sys 0m8.539s 00:39:23.545 09:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:23.545 09:08:25 -- common/autotest_common.sh@10 -- # set +x 00:39:23.545 ************************************ 00:39:23.545 END TEST nvme_xnvme 00:39:23.545 ************************************ 00:39:23.545 09:08:25 -- spdk/autotest.sh@247 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:39:23.545 09:08:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:39:23.545 09:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:23.545 09:08:25 -- common/autotest_common.sh@10 -- # set +x 00:39:23.545 ************************************ 00:39:23.545 START TEST blockdev_xnvme 00:39:23.545 ************************************ 00:39:23.545 09:08:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:39:23.545 * Looking for test storage... 
00:39:23.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:23.545 09:08:25 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:23.545 09:08:25 -- bdev/nbd_common.sh@6 -- # set -e 00:39:23.545 09:08:25 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:23.545 09:08:25 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:23.545 09:08:25 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:23.545 09:08:25 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:23.545 09:08:25 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:23.545 09:08:25 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:23.545 09:08:25 -- bdev/blockdev.sh@20 -- # : 00:39:23.545 09:08:25 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:23.545 09:08:25 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:23.545 09:08:25 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:23.545 09:08:25 -- bdev/blockdev.sh@674 -- # uname -s 00:39:23.545 09:08:25 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:23.545 09:08:25 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:23.545 09:08:25 -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:39:23.545 09:08:25 -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:23.545 09:08:25 -- bdev/blockdev.sh@684 -- # dek= 00:39:23.545 09:08:25 -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:23.545 09:08:25 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:23.545 09:08:25 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:23.545 09:08:25 -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:39:23.545 09:08:25 -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:39:23.545 09:08:25 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:23.545 09:08:25 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73992 00:39:23.545 09:08:25 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:23.545 09:08:25 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:23.545 09:08:25 -- bdev/blockdev.sh@49 -- # waitforlisten 73992 00:39:23.545 09:08:25 -- common/autotest_common.sh@817 -- # '[' -z 73992 ']' 00:39:23.545 09:08:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.545 09:08:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:39:23.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:23.545 09:08:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.545 09:08:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:39:23.545 09:08:25 -- common/autotest_common.sh@10 -- # set +x 00:39:23.802 [2024-04-18 09:08:25.665083] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:39:23.802 [2024-04-18 09:08:25.665249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73992 ] 00:39:23.803 [2024-04-18 09:08:25.848897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.061 [2024-04-18 09:08:26.108171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.475 09:08:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:39:25.475 09:08:27 -- common/autotest_common.sh@850 -- # return 0 00:39:25.475 09:08:27 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:25.475 09:08:27 -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:39:25.475 09:08:27 -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:39:25.475 09:08:27 -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:39:25.475 09:08:27 -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:25.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:25.991 Waiting for block devices as requested 00:39:25.991 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:25.991 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:26.250 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:39:26.250 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:39:31.529 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:39:31.529 09:08:33 -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:39:31.529 09:08:33 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:39:31.529 09:08:33 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:39:31.529 09:08:33 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:39:31.529 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:39:31.529 09:08:33 -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:39:31.529 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.529 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.529 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:39:31.529 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:39:31.530 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:39:31.530 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.530 09:08:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:39:31.530 09:08:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:39:31.530 09:08:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:39:31.530 09:08:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:39:31.530 09:08:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:39:31.530 09:08:33 -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:39:31.530 09:08:33 -- bdev/blockdev.sh@100 -- # rpc_cmd 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 
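The device scan above reduces to a short loop: every /dev/nvme*n* node that is a real block device and not zoned gets one bdev_xnvme_create line, and the collected lines are fed to the RPC pipe, as the printf that follows shows. A sketch of the same logic, assuming zoned_devs is the map built by get_zoned_devs (empty in this run, since every zoned check above returned none):

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        # keep real block devices only, and skip any zoned namespaces
        [[ -b $nvme && -z ${zoned_devs[${nvme##*/}]:-} ]] || continue
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
    done
    printf '%s\n' "${nvmes[@]}"   # one create call per namespace, piped into rpc_cmd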
00:39:31.530 09:08:33 -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:39:31.530 nvme0n1 00:39:31.530 nvme1n1 00:39:31.530 nvme2n1 00:39:31.530 nvme2n2 00:39:31.530 nvme2n3 00:39:31.530 nvme3n1 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@740 -- # cat 00:39:31.530 09:08:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:31.530 09:08:33 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:31.530 09:08:33 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:39:31.530 09:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:39:31.530 09:08:33 -- common/autotest_common.sh@10 -- # set +x 00:39:31.530 09:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:39:31.530 09:08:33 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:31.530 09:08:33 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "87f68c92-9ee6-43c9-8e4b-ce6d7a1ec909"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "87f68c92-9ee6-43c9-8e4b-ce6d7a1ec909",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "041de362-981f-427b-a70b-abdd6e25b046"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "041de362-981f-427b-a70b-abdd6e25b046",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f4f3c8ab-4ff7-46f9-8e69-e6847fd9def3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4f3c8ab-4ff7-46f9-8e69-e6847fd9def3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ff3ad11c-7f09-450a-a34d-3d11e2eaef22"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff3ad11c-7f09-450a-a34d-3d11e2eaef22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "26efc298-34d0-406f-8482-070ec3398354"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "26efc298-34d0-406f-8482-070ec3398354",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1b8a846f-41a3-44cc-a20d-1f430d5fcacf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1b8a846f-41a3-44cc-a20d-1f430d5fcacf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:39:31.530 09:08:33 -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:31.788 09:08:33 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:31.788 09:08:33 -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:39:31.788 09:08:33 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:31.788 09:08:33 -- bdev/blockdev.sh@754 -- # killprocess 73992 00:39:31.788 09:08:33 -- common/autotest_common.sh@936 -- # '[' -z 73992 ']' 00:39:31.788 09:08:33 -- common/autotest_common.sh@940 -- # kill -0 73992 00:39:31.788 
09:08:33 -- common/autotest_common.sh@941 -- # uname 00:39:31.788 09:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:31.788 09:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73992 00:39:31.788 09:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:31.788 09:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:31.788 09:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73992' 00:39:31.788 killing process with pid 73992 00:39:31.788 09:08:33 -- common/autotest_common.sh@955 -- # kill 73992 00:39:31.788 09:08:33 -- common/autotest_common.sh@960 -- # wait 73992 00:39:35.099 09:08:36 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:35.099 09:08:36 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:39:35.099 09:08:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:39:35.099 09:08:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:35.099 09:08:36 -- common/autotest_common.sh@10 -- # set +x 00:39:35.099 ************************************ 00:39:35.099 START TEST bdev_hello_world 00:39:35.099 ************************************ 00:39:35.099 09:08:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:39:35.099 [2024-04-18 09:08:36.671057] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:39:35.099 [2024-04-18 09:08:36.671401] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74382 ] 00:39:35.099 [2024-04-18 09:08:36.847213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.099 [2024-04-18 09:08:37.112973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.677 [2024-04-18 09:08:37.643713] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:35.677 [2024-04-18 09:08:37.643948] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:39:35.677 [2024-04-18 09:08:37.644022] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:35.677 [2024-04-18 09:08:37.646410] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:35.677 [2024-04-18 09:08:37.646829] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:35.677 [2024-04-18 09:08:37.646955] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:35.677 [2024-04-18 09:08:37.647202] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:39:35.677 00:39:35.677 [2024-04-18 09:08:37.647321] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:37.098 00:39:37.098 real 0m2.518s 00:39:37.098 user 0m2.131s 00:39:37.098 sys 0m0.261s 00:39:37.098 09:08:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:37.098 09:08:39 -- common/autotest_common.sh@10 -- # set +x 00:39:37.098 ************************************ 00:39:37.098 END TEST bdev_hello_world 00:39:37.098 ************************************ 00:39:37.098 09:08:39 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:37.098 09:08:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:39:37.098 09:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:37.098 09:08:39 -- common/autotest_common.sh@10 -- # set +x 00:39:37.357 ************************************ 00:39:37.357 START TEST bdev_bounds 00:39:37.357 ************************************ 00:39:37.357 09:08:39 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:39:37.357 09:08:39 -- bdev/blockdev.sh@290 -- # bdevio_pid=74434 00:39:37.357 09:08:39 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:37.357 09:08:39 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 74434' 00:39:37.357 Process bdevio pid: 74434 00:39:37.357 09:08:39 -- bdev/blockdev.sh@293 -- # waitforlisten 74434 00:39:37.357 09:08:39 -- common/autotest_common.sh@817 -- # '[' -z 74434 ']' 00:39:37.357 09:08:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.357 09:08:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:39:37.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.357 09:08:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.357 09:08:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:39:37.357 09:08:39 -- common/autotest_common.sh@10 -- # set +x 00:39:37.357 09:08:39 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:37.357 [2024-04-18 09:08:39.336467] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:39:37.357 [2024-04-18 09:08:39.336938] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74434 ] 00:39:37.616 [2024-04-18 09:08:39.532656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:37.874 [2024-04-18 09:08:39.836475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.874 [2024-04-18 09:08:39.836481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.874 [2024-04-18 09:08:39.836500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.440 09:08:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:39:38.440 09:08:40 -- common/autotest_common.sh@850 -- # return 0 00:39:38.440 09:08:40 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:38.440 I/O targets: 00:39:38.440 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:39:38.440 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:39:38.440 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:38.440 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:38.440 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:39:38.440 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:39:38.440 00:39:38.440 00:39:38.440 CUnit - A unit testing framework for C - Version 2.1-3 00:39:38.440 http://cunit.sourceforge.net/ 00:39:38.440 00:39:38.440 00:39:38.440 Suite: bdevio tests on: nvme3n1 00:39:38.440 Test: blockdev write read block ...passed 00:39:38.440 Test: blockdev write zeroes read block ...passed 00:39:38.440 Test: blockdev write zeroes read no split ...passed 00:39:38.698 Test: blockdev write zeroes read split ...passed 00:39:38.698 Test: blockdev write zeroes read split partial ...passed 00:39:38.698 Test: blockdev reset ...passed 00:39:38.698 Test: blockdev write read 8 blocks ...passed 00:39:38.698 Test: blockdev write read size > 128k ...passed 00:39:38.698 Test: blockdev write read invalid size ...passed 00:39:38.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:38.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:38.698 Test: blockdev write read max offset ...passed 00:39:38.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:38.698 Test: blockdev writev readv 8 blocks ...passed 00:39:38.698 Test: blockdev writev readv 30 x 1block ...passed 00:39:38.698 Test: blockdev writev readv block ...passed 00:39:38.698 Test: blockdev writev readv size > 128k ...passed 00:39:38.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:38.698 Test: blockdev comparev and writev ...passed 00:39:38.698 Test: blockdev nvme passthru rw ...passed 00:39:38.698 Test: blockdev nvme passthru vendor specific ...passed 00:39:38.698 Test: blockdev nvme admin passthru ...passed 00:39:38.698 Test: blockdev copy ...passed 00:39:38.698 Suite: bdevio tests on: nvme2n3 00:39:38.698 Test: blockdev write read block ...passed 00:39:38.698 Test: blockdev write zeroes read block ...passed 00:39:38.698 Test: blockdev write zeroes read no split ...passed 00:39:38.698 Test: blockdev write zeroes read split ...passed 00:39:38.698 Test: blockdev write zeroes read split partial ...passed 00:39:38.698 Test: blockdev reset ...passed 00:39:38.698 Test: blockdev write read 8 blocks ...passed 00:39:38.698 Test: blockdev write read size > 128k 
...passed 00:39:38.698 Test: blockdev write read invalid size ...passed 00:39:38.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:38.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:38.698 Test: blockdev write read max offset ...passed 00:39:38.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:38.698 Test: blockdev writev readv 8 blocks ...passed 00:39:38.698 Test: blockdev writev readv 30 x 1block ...passed 00:39:38.698 Test: blockdev writev readv block ...passed 00:39:38.698 Test: blockdev writev readv size > 128k ...passed 00:39:38.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:38.698 Test: blockdev comparev and writev ...passed 00:39:38.698 Test: blockdev nvme passthru rw ...passed 00:39:38.698 Test: blockdev nvme passthru vendor specific ...passed 00:39:38.698 Test: blockdev nvme admin passthru ...passed 00:39:38.698 Test: blockdev copy ...passed 00:39:38.698 Suite: bdevio tests on: nvme2n2 00:39:38.698 Test: blockdev write read block ...passed 00:39:38.698 Test: blockdev write zeroes read block ...passed 00:39:38.698 Test: blockdev write zeroes read no split ...passed 00:39:38.698 Test: blockdev write zeroes read split ...passed 00:39:38.957 Test: blockdev write zeroes read split partial ...passed 00:39:38.957 Test: blockdev reset ...passed 00:39:38.957 Test: blockdev write read 8 blocks ...passed 00:39:38.957 Test: blockdev write read size > 128k ...passed 00:39:38.957 Test: blockdev write read invalid size ...passed 00:39:38.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:38.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:38.957 Test: blockdev write read max offset ...passed 00:39:38.957 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:38.957 Test: blockdev writev readv 8 blocks ...passed 00:39:38.957 Test: blockdev writev readv 30 x 1block ...passed 00:39:38.957 Test: blockdev writev readv block ...passed 00:39:38.957 Test: blockdev writev readv size > 128k ...passed 00:39:38.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:38.957 Test: blockdev comparev and writev ...passed 00:39:38.957 Test: blockdev nvme passthru rw ...passed 00:39:38.957 Test: blockdev nvme passthru vendor specific ...passed 00:39:38.957 Test: blockdev nvme admin passthru ...passed 00:39:38.957 Test: blockdev copy ...passed 00:39:38.957 Suite: bdevio tests on: nvme2n1 00:39:38.957 Test: blockdev write read block ...passed 00:39:38.957 Test: blockdev write zeroes read block ...passed 00:39:38.957 Test: blockdev write zeroes read no split ...passed 00:39:38.957 Test: blockdev write zeroes read split ...passed 00:39:38.957 Test: blockdev write zeroes read split partial ...passed 00:39:38.957 Test: blockdev reset ...passed 00:39:38.957 Test: blockdev write read 8 blocks ...passed 00:39:38.957 Test: blockdev write read size > 128k ...passed 00:39:38.957 Test: blockdev write read invalid size ...passed 00:39:38.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:38.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:38.957 Test: blockdev write read max offset ...passed 00:39:38.957 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:38.957 Test: blockdev writev readv 8 blocks ...passed 00:39:38.957 Test: blockdev writev readv 30 x 1block ...passed 00:39:38.957 Test: blockdev writev readv 
block ...passed 00:39:38.957 Test: blockdev writev readv size > 128k ...passed 00:39:38.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:38.957 Test: blockdev comparev and writev ...passed 00:39:38.957 Test: blockdev nvme passthru rw ...passed 00:39:38.957 Test: blockdev nvme passthru vendor specific ...passed 00:39:38.957 Test: blockdev nvme admin passthru ...passed 00:39:38.957 Test: blockdev copy ...passed 00:39:38.957 Suite: bdevio tests on: nvme1n1 00:39:38.957 Test: blockdev write read block ...passed 00:39:38.957 Test: blockdev write zeroes read block ...passed 00:39:38.957 Test: blockdev write zeroes read no split ...passed 00:39:38.957 Test: blockdev write zeroes read split ...passed 00:39:38.957 Test: blockdev write zeroes read split partial ...passed 00:39:38.957 Test: blockdev reset ...passed 00:39:38.957 Test: blockdev write read 8 blocks ...passed 00:39:38.957 Test: blockdev write read size > 128k ...passed 00:39:38.958 Test: blockdev write read invalid size ...passed 00:39:38.958 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:38.958 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:38.958 Test: blockdev write read max offset ...passed 00:39:38.958 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:38.958 Test: blockdev writev readv 8 blocks ...passed 00:39:38.958 Test: blockdev writev readv 30 x 1block ...passed 00:39:38.958 Test: blockdev writev readv block ...passed 00:39:38.958 Test: blockdev writev readv size > 128k ...passed 00:39:38.958 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:38.958 Test: blockdev comparev and writev ...passed 00:39:38.958 Test: blockdev nvme passthru rw ...passed 00:39:38.958 Test: blockdev nvme passthru vendor specific ...passed 00:39:38.958 Test: blockdev nvme admin passthru ...passed 00:39:38.958 Test: blockdev copy ...passed 00:39:38.958 Suite: bdevio tests on: nvme0n1 00:39:38.958 Test: blockdev write read block ...passed 00:39:38.958 Test: blockdev write zeroes read block ...passed 00:39:38.958 Test: blockdev write zeroes read no split ...passed 00:39:38.958 Test: blockdev write zeroes read split ...passed 00:39:39.217 Test: blockdev write zeroes read split partial ...passed 00:39:39.217 Test: blockdev reset ...passed 00:39:39.217 Test: blockdev write read 8 blocks ...passed 00:39:39.217 Test: blockdev write read size > 128k ...passed 00:39:39.217 Test: blockdev write read invalid size ...passed 00:39:39.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:39.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:39.217 Test: blockdev write read max offset ...passed 00:39:39.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:39.217 Test: blockdev writev readv 8 blocks ...passed 00:39:39.217 Test: blockdev writev readv 30 x 1block ...passed 00:39:39.217 Test: blockdev writev readv block ...passed 00:39:39.217 Test: blockdev writev readv size > 128k ...passed 00:39:39.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:39.217 Test: blockdev comparev and writev ...passed 00:39:39.217 Test: blockdev nvme passthru rw ...passed 00:39:39.217 Test: blockdev nvme passthru vendor specific ...passed 00:39:39.217 Test: blockdev nvme admin passthru ...passed 00:39:39.217 Test: blockdev copy ...passed 00:39:39.217 00:39:39.217 Run Summary: Type Total Ran Passed Failed Inactive 00:39:39.217 suites 6 6 n/a 0 0 
00:39:39.217 tests 138 138 138 0 0 00:39:39.217 asserts 780 780 780 0 n/a 00:39:39.217 00:39:39.217 Elapsed time = 1.574 seconds 00:39:39.217 0 00:39:39.217 09:08:41 -- bdev/blockdev.sh@295 -- # killprocess 74434 00:39:39.217 09:08:41 -- common/autotest_common.sh@936 -- # '[' -z 74434 ']' 00:39:39.217 09:08:41 -- common/autotest_common.sh@940 -- # kill -0 74434 00:39:39.217 09:08:41 -- common/autotest_common.sh@941 -- # uname 00:39:39.217 09:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:39.217 09:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74434 00:39:39.217 09:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:39.217 09:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:39.217 09:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74434' 00:39:39.217 killing process with pid 74434 00:39:39.217 09:08:41 -- common/autotest_common.sh@955 -- # kill 74434 00:39:39.217 09:08:41 -- common/autotest_common.sh@960 -- # wait 74434 00:39:40.593 09:08:42 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:40.593 00:39:40.593 real 0m3.407s 00:39:40.593 user 0m7.860s 00:39:40.593 sys 0m0.474s 00:39:40.593 09:08:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:40.593 09:08:42 -- common/autotest_common.sh@10 -- # set +x 00:39:40.593 ************************************ 00:39:40.593 END TEST bdev_bounds 00:39:40.593 ************************************ 00:39:40.593 09:08:42 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:39:40.593 09:08:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:39:40.593 09:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:40.593 09:08:42 -- common/autotest_common.sh@10 -- # set +x 00:39:40.852 ************************************ 00:39:40.852 START TEST bdev_nbd 00:39:40.852 ************************************ 00:39:40.852 09:08:42 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:39:40.852 09:08:42 -- bdev/blockdev.sh@300 -- # uname -s 00:39:40.852 09:08:42 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:40.852 09:08:42 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:40.852 09:08:42 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:40.852 09:08:42 -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:40.852 09:08:42 -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:40.852 09:08:42 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:39:40.852 09:08:42 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:40.852 09:08:42 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:40.852 09:08:42 -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:40.853 09:08:42 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:39:40.853 09:08:42 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:40.853 09:08:42 -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:40.853 09:08:42 -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:40.853 09:08:42 -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:40.853 09:08:42 -- bdev/blockdev.sh@318 -- # nbd_pid=74509 00:39:40.853 09:08:42 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:40.853 09:08:42 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:40.853 09:08:42 -- bdev/blockdev.sh@320 -- # waitforlisten 74509 /var/tmp/spdk-nbd.sock 00:39:40.853 09:08:42 -- common/autotest_common.sh@817 -- # '[' -z 74509 ']' 00:39:40.853 09:08:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:40.853 09:08:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:39:40.853 09:08:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:40.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:40.853 09:08:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:39:40.853 09:08:42 -- common/autotest_common.sh@10 -- # set +x 00:39:40.853 [2024-04-18 09:08:42.876310] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:39:40.853 [2024-04-18 09:08:42.876470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.112 [2024-04-18 09:08:43.047540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.371 [2024-04-18 09:08:43.328027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.939 09:08:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:39:41.939 09:08:43 -- common/autotest_common.sh@850 -- # return 0 00:39:41.939 09:08:43 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@24 -- # local i 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:41.939 09:08:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:39:42.197 09:08:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:42.197 09:08:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:42.197 09:08:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:42.197 09:08:44 -- common/autotest_common.sh@854 -- # local 
nbd_name=nbd0 00:39:42.197 09:08:44 -- common/autotest_common.sh@855 -- # local i 00:39:42.197 09:08:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:42.198 09:08:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:42.198 09:08:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:39:42.198 09:08:44 -- common/autotest_common.sh@859 -- # break 00:39:42.198 09:08:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:42.198 09:08:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:42.198 09:08:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.198 1+0 records in 00:39:42.198 1+0 records out 00:39:42.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536422 s, 7.6 MB/s 00:39:42.198 09:08:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.198 09:08:44 -- common/autotest_common.sh@872 -- # size=4096 00:39:42.198 09:08:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.198 09:08:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:42.198 09:08:44 -- common/autotest_common.sh@875 -- # return 0 00:39:42.198 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.198 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:42.198 09:08:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:42.456 09:08:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:42.456 09:08:44 -- common/autotest_common.sh@855 -- # local i 00:39:42.456 09:08:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:42.456 09:08:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:42.456 09:08:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:42.456 09:08:44 -- common/autotest_common.sh@859 -- # break 00:39:42.456 09:08:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:42.456 09:08:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:42.456 09:08:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.456 1+0 records in 00:39:42.456 1+0 records out 00:39:42.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116822 s, 3.5 MB/s 00:39:42.456 09:08:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.456 09:08:44 -- common/autotest_common.sh@872 -- # size=4096 00:39:42.456 09:08:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.456 09:08:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:42.456 09:08:44 -- common/autotest_common.sh@875 -- # return 0 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:42.456 09:08:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:39:42.715 09:08:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:39:42.715 09:08:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:39:42.715 09:08:44 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:39:42.715 09:08:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:39:42.715 09:08:44 -- common/autotest_common.sh@855 -- # local i 00:39:42.715 09:08:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:42.715 09:08:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:42.715 09:08:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:39:42.715 09:08:44 -- common/autotest_common.sh@859 -- # break 00:39:42.715 09:08:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:42.715 09:08:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:42.715 09:08:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.974 1+0 records in 00:39:42.974 1+0 records out 00:39:42.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582263 s, 7.0 MB/s 00:39:42.974 09:08:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.974 09:08:44 -- common/autotest_common.sh@872 -- # size=4096 00:39:42.974 09:08:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.974 09:08:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:42.974 09:08:44 -- common/autotest_common.sh@875 -- # return 0 00:39:42.974 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.974 09:08:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:42.974 09:08:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:39:43.240 09:08:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:39:43.240 09:08:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:39:43.240 09:08:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:39:43.240 09:08:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:39:43.240 09:08:45 -- common/autotest_common.sh@855 -- # local i 00:39:43.240 09:08:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:43.240 09:08:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:43.241 09:08:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:39:43.241 09:08:45 -- common/autotest_common.sh@859 -- # break 00:39:43.241 09:08:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:43.241 09:08:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:43.241 09:08:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.241 1+0 records in 00:39:43.241 1+0 records out 00:39:43.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556936 s, 7.4 MB/s 00:39:43.241 09:08:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.241 09:08:45 -- common/autotest_common.sh@872 -- # size=4096 00:39:43.241 09:08:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.241 09:08:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:43.241 09:08:45 -- common/autotest_common.sh@875 -- # return 0 00:39:43.241 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:43.241 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:43.241 09:08:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:39:43.531 09:08:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:39:43.531 09:08:45 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:39:43.531 09:08:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:39:43.531 09:08:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:39:43.531 09:08:45 -- common/autotest_common.sh@855 -- # local i 00:39:43.531 09:08:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:43.531 09:08:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:43.531 09:08:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:39:43.531 09:08:45 -- common/autotest_common.sh@859 -- # break 00:39:43.531 09:08:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:43.531 09:08:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:43.531 09:08:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.531 1+0 records in 00:39:43.531 1+0 records out 00:39:43.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568848 s, 7.2 MB/s 00:39:43.532 09:08:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.532 09:08:45 -- common/autotest_common.sh@872 -- # size=4096 00:39:43.532 09:08:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.532 09:08:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:43.532 09:08:45 -- common/autotest_common.sh@875 -- # return 0 00:39:43.532 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:43.532 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:43.532 09:08:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:39:43.790 09:08:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:39:43.790 09:08:45 -- common/autotest_common.sh@855 -- # local i 00:39:43.790 09:08:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:43.790 09:08:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:43.790 09:08:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:39:43.790 09:08:45 -- common/autotest_common.sh@859 -- # break 00:39:43.790 09:08:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:43.790 09:08:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:43.790 09:08:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.790 1+0 records in 00:39:43.790 1+0 records out 00:39:43.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582502 s, 7.0 MB/s 00:39:43.790 09:08:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.790 09:08:45 -- common/autotest_common.sh@872 -- # size=4096 00:39:43.790 09:08:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.790 09:08:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:43.790 09:08:45 -- common/autotest_common.sh@875 -- # return 0 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:39:43.790 09:08:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:44.049 09:08:46 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd0", 00:39:44.049 "bdev_name": "nvme0n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd1", 00:39:44.049 "bdev_name": "nvme1n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd2", 00:39:44.049 "bdev_name": "nvme2n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd3", 00:39:44.049 "bdev_name": "nvme2n2" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd4", 00:39:44.049 "bdev_name": "nvme2n3" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd5", 00:39:44.049 "bdev_name": "nvme3n1" 00:39:44.049 } 00:39:44.049 ]' 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd0", 00:39:44.049 "bdev_name": "nvme0n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd1", 00:39:44.049 "bdev_name": "nvme1n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd2", 00:39:44.049 "bdev_name": "nvme2n1" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd3", 00:39:44.049 "bdev_name": "nvme2n2" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd4", 00:39:44.049 "bdev_name": "nvme2n3" 00:39:44.049 }, 00:39:44.049 { 00:39:44.049 "nbd_device": "/dev/nbd5", 00:39:44.049 "bdev_name": "nvme3n1" 00:39:44.049 } 00:39:44.049 ]' 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@51 -- # local i 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.049 09:08:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@41 -- # break 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.308 09:08:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@41 -- # break 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.875 09:08:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@41 -- # break 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:39:45.134 09:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@41 -- # break 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@41 -- # break 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.394 09:08:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:39:45.652 09:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@41 -- # break 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
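Each device teardown above follows the same pattern, and the nbd_get_disks query whose result follows is the final assertion: once every /dev/nbdX has been stopped and has dropped out of /proc/partitions, the disk list must come back empty. Condensed from the traced commands, the check amounts to:

# Stop one exported device, then poll until the kernel drops it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
grep -q -w nbd0 /proc/partitions   # loop while this still succeeds
# After all six are gone, count the remaining exports; a count of 0 is a pass.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' | grep -c /dev/nbd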
00:39:45.935 09:08:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:45.935 09:08:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@65 -- # true 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@65 -- # count=0 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@122 -- # count=0 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@127 -- # return 0 00:39:45.935 09:08:48 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:45.935 09:08:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@12 -- # local i 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:39:46.194 /dev/nbd0 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:46.194 09:08:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:46.194 09:08:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:39:46.194 09:08:48 -- common/autotest_common.sh@855 -- # local i 00:39:46.194 09:08:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:46.194 09:08:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:46.194 09:08:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:39:46.453 09:08:48 -- common/autotest_common.sh@859 -- # break 00:39:46.453 09:08:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:46.453 1+0 records in 00:39:46.453 1+0 records out 00:39:46.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618104 s, 
6.6 MB/s 00:39:46.453 09:08:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.453 09:08:48 -- common/autotest_common.sh@872 -- # size=4096 00:39:46.453 09:08:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.453 09:08:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:46.453 09:08:48 -- common/autotest_common.sh@875 -- # return 0 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:39:46.453 /dev/nbd1 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:46.453 09:08:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:39:46.453 09:08:48 -- common/autotest_common.sh@855 -- # local i 00:39:46.453 09:08:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:39:46.453 09:08:48 -- common/autotest_common.sh@859 -- # break 00:39:46.453 09:08:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:46.453 09:08:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:46.453 1+0 records in 00:39:46.453 1+0 records out 00:39:46.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584604 s, 7.0 MB/s 00:39:46.453 09:08:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.453 09:08:48 -- common/autotest_common.sh@872 -- # size=4096 00:39:46.453 09:08:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.453 09:08:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:46.453 09:08:48 -- common/autotest_common.sh@875 -- # return 0 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:46.453 09:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:39:47.019 /dev/nbd10 00:39:47.020 09:08:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:39:47.020 09:08:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:39:47.020 09:08:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:39:47.020 09:08:48 -- common/autotest_common.sh@855 -- # local i 00:39:47.020 09:08:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:47.020 09:08:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:47.020 09:08:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:39:47.020 09:08:48 -- common/autotest_common.sh@859 -- # break 00:39:47.020 09:08:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:47.020 09:08:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:47.020 09:08:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.020 1+0 records in 00:39:47.020 1+0 records out 00:39:47.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000566503 s, 7.2 MB/s 00:39:47.020 09:08:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.020 09:08:48 -- common/autotest_common.sh@872 -- # size=4096 00:39:47.020 09:08:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.020 09:08:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:47.020 09:08:48 -- common/autotest_common.sh@875 -- # return 0 00:39:47.020 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:47.020 09:08:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:47.020 09:08:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:39:47.278 /dev/nbd11 00:39:47.278 09:08:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:39:47.278 09:08:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:39:47.278 09:08:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:39:47.278 09:08:49 -- common/autotest_common.sh@855 -- # local i 00:39:47.278 09:08:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:47.278 09:08:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:47.278 09:08:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:39:47.278 09:08:49 -- common/autotest_common.sh@859 -- # break 00:39:47.278 09:08:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:47.278 09:08:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:47.278 09:08:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.278 1+0 records in 00:39:47.278 1+0 records out 00:39:47.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676759 s, 6.1 MB/s 00:39:47.278 09:08:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.278 09:08:49 -- common/autotest_common.sh@872 -- # size=4096 00:39:47.278 09:08:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.278 09:08:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:47.278 09:08:49 -- common/autotest_common.sh@875 -- # return 0 00:39:47.278 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:47.278 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:47.278 09:08:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:39:47.537 /dev/nbd12 00:39:47.537 09:08:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:39:47.537 09:08:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:39:47.537 09:08:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:39:47.537 09:08:49 -- common/autotest_common.sh@855 -- # local i 00:39:47.537 09:08:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:47.537 09:08:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:47.537 09:08:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:39:47.537 09:08:49 -- common/autotest_common.sh@859 -- # break 00:39:47.537 09:08:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:47.537 09:08:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:47.537 09:08:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.537 1+0 records in 00:39:47.537 1+0 records out 00:39:47.537 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000527889 s, 7.8 MB/s 00:39:47.537 09:08:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.537 09:08:49 -- common/autotest_common.sh@872 -- # size=4096 00:39:47.537 09:08:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.537 09:08:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:47.537 09:08:49 -- common/autotest_common.sh@875 -- # return 0 00:39:47.537 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:47.537 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:47.537 09:08:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:39:47.796 /dev/nbd13 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:39:47.796 09:08:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:39:47.796 09:08:49 -- common/autotest_common.sh@855 -- # local i 00:39:47.796 09:08:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:39:47.796 09:08:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:39:47.796 09:08:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:39:47.796 09:08:49 -- common/autotest_common.sh@859 -- # break 00:39:47.796 09:08:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:47.796 09:08:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:47.796 09:08:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.796 1+0 records in 00:39:47.796 1+0 records out 00:39:47.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807934 s, 5.1 MB/s 00:39:47.796 09:08:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.796 09:08:49 -- common/autotest_common.sh@872 -- # size=4096 00:39:47.796 09:08:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.796 09:08:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:39:47.796 09:08:49 -- common/autotest_common.sh@875 -- # return 0 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:47.796 09:08:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:48.053 09:08:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd0", 00:39:48.053 "bdev_name": "nvme0n1" 00:39:48.053 }, 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd1", 00:39:48.053 "bdev_name": "nvme1n1" 00:39:48.053 }, 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd10", 00:39:48.053 "bdev_name": "nvme2n1" 00:39:48.053 }, 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd11", 00:39:48.053 "bdev_name": "nvme2n2" 00:39:48.053 }, 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd12", 00:39:48.053 "bdev_name": "nvme2n3" 00:39:48.053 }, 00:39:48.053 { 00:39:48.053 "nbd_device": "/dev/nbd13", 00:39:48.053 "bdev_name": "nvme3n1" 00:39:48.053 } 00:39:48.053 ]' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:48.054 09:08:50 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd0", 00:39:48.054 "bdev_name": "nvme0n1" 00:39:48.054 }, 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd1", 00:39:48.054 "bdev_name": "nvme1n1" 00:39:48.054 }, 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd10", 00:39:48.054 "bdev_name": "nvme2n1" 00:39:48.054 }, 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd11", 00:39:48.054 "bdev_name": "nvme2n2" 00:39:48.054 }, 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd12", 00:39:48.054 "bdev_name": "nvme2n3" 00:39:48.054 }, 00:39:48.054 { 00:39:48.054 "nbd_device": "/dev/nbd13", 00:39:48.054 "bdev_name": "nvme3n1" 00:39:48.054 } 00:39:48.054 ]' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:48.054 /dev/nbd1 00:39:48.054 /dev/nbd10 00:39:48.054 /dev/nbd11 00:39:48.054 /dev/nbd12 00:39:48.054 /dev/nbd13' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:48.054 /dev/nbd1 00:39:48.054 /dev/nbd10 00:39:48.054 /dev/nbd11 00:39:48.054 /dev/nbd12 00:39:48.054 /dev/nbd13' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@65 -- # count=6 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@66 -- # echo 6 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@95 -- # count=6 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:48.054 256+0 records in 00:39:48.054 256+0 records out 00:39:48.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669633 s, 157 MB/s 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.054 09:08:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:48.329 256+0 records in 00:39:48.329 256+0 records out 00:39:48.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12899 s, 8.1 MB/s 00:39:48.329 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.329 09:08:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:48.329 256+0 records in 00:39:48.329 256+0 records out 00:39:48.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147868 s, 7.1 MB/s 00:39:48.330 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.330 09:08:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:39:48.588 256+0 records in 00:39:48.588 256+0 records out 00:39:48.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179579 s, 5.8 MB/s 00:39:48.588 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.588 09:08:50 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:39:48.846 256+0 records in 00:39:48.846 256+0 records out 00:39:48.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165283 s, 6.3 MB/s 00:39:48.846 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.846 09:08:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:39:48.846 256+0 records in 00:39:48.846 256+0 records out 00:39:48.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152546 s, 6.9 MB/s 00:39:48.846 09:08:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.846 09:08:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:39:49.104 256+0 records in 00:39:49.104 256+0 records out 00:39:49.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128467 s, 8.2 MB/s 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@51 -- # local i 00:39:49.104 09:08:51 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.104 09:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@41 -- # break 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.363 09:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@41 -- # break 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.620 09:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@41 -- # break 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:49.877 09:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:39:50.135 09:08:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:39:50.135 09:08:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:39:50.135 09:08:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@41 -- # break 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@45 -- # return 0 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:50.136 09:08:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:39:50.393 09:08:52 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@41 -- # break 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@45 -- # return 0 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:50.393 09:08:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@41 -- # break 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@45 -- # return 0 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:50.960 09:08:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:50.960 09:08:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:50.960 09:08:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:50.960 09:08:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@65 -- # true 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@65 -- # count=0 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@104 -- # count=0 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@109 -- # return 0 00:39:51.216 09:08:53 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:39:51.216 09:08:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:51.217 09:08:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:39:51.217 09:08:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:39:51.217 09:08:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:51.217 09:08:53 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:51.514 malloc_lvol_verify 00:39:51.514 09:08:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:51.773 9917f289-6f2e-4666-9c22-3f89895b1094 00:39:51.773 09:08:53 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:52.033 a8285a3e-54d9-4eec-8f92-a931beeab606 00:39:52.033 09:08:53 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:52.291 /dev/nbd0 00:39:52.291 09:08:54 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:52.291 mke2fs 1.46.5 (30-Dec-2021) 00:39:52.291 Discarding device blocks: 0/4096 done 00:39:52.291 Creating filesystem with 4096 1k blocks and 1024 inodes 00:39:52.291 00:39:52.291 Allocating group tables: 0/1 done 00:39:52.291 Writing inode tables: 0/1 done 00:39:52.291 Creating journal (1024 blocks): done 00:39:52.291 Writing superblocks and filesystem accounting information: 0/1 done 00:39:52.291 00:39:52.291 09:08:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:52.291 09:08:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:52.291 09:08:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:52.292 09:08:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:52.292 09:08:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:52.292 09:08:54 -- bdev/nbd_common.sh@51 -- # local i 00:39:52.292 09:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:52.292 09:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@41 -- # break 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@45 -- # return 0 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:52.551 09:08:54 -- bdev/nbd_common.sh@147 -- # return 0 00:39:52.551 09:08:54 -- bdev/blockdev.sh@326 -- # killprocess 74509 00:39:52.551 09:08:54 -- common/autotest_common.sh@936 -- # '[' -z 74509 ']' 00:39:52.551 09:08:54 -- common/autotest_common.sh@940 -- # kill -0 74509 00:39:52.551 09:08:54 -- common/autotest_common.sh@941 -- # uname 00:39:52.551 09:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:39:52.551 09:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74509 00:39:52.551 09:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:39:52.551 09:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:39:52.551 09:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74509' 00:39:52.551 killing process with pid 74509 00:39:52.551 09:08:54 -- common/autotest_common.sh@955 -- # kill 74509 00:39:52.551 09:08:54 -- common/autotest_common.sh@960 -- # wait 74509 00:39:54.453 09:08:56 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:54.453 00:39:54.453 real 0m13.304s 00:39:54.453 user 0m17.771s 00:39:54.453 sys 0m5.107s 00:39:54.453 09:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:39:54.453 09:08:56 -- common/autotest_common.sh@10 -- # set +x 00:39:54.453 ************************************ 00:39:54.453 END TEST bdev_nbd 00:39:54.453 ************************************ 00:39:54.453 09:08:56 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:54.453 09:08:56 -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
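The bdev_nbd pass that just finished repeats one pattern per device: export the bdev as an NBD block device over the RPC socket, dd a random file onto it with O_DIRECT, cmp it back byte for byte, then stop the disk and poll /proc/partitions until the kernel releases it. A condensed sketch of that loop, built from the nbd_start_disk/nbd_stop_disk RPCs visible above (bdev name, file path, and the sleep interval are illustrative):

# Minimal NBD round-trip verify, assuming a running target on /var/tmp/spdk-nbd.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk malloc0 /dev/nbd0            # export bdev "malloc0" as /dev/nbd0
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0          # nonzero exit on any byte mismatch
$rpc nbd_stop_disk /dev/nbd0
for i in $(seq 1 20); do                         # same 20-try wait as waitfornbd_exit above
    grep -q -w nbd0 /proc/partitions || break    # break once the kernel drops nbd0
    sleep 0.1
done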
00:39:54.453 09:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:54.453 09:08:56 -- common/autotest_common.sh@10 -- # set +x 00:39:54.453 ************************************ 00:39:54.453 START TEST bdev_fio 00:39:54.453 ************************************ 00:39:54.453 09:08:56 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@331 -- # local env_context 00:39:54.453 09:08:56 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:39:54.453 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:39:54.453 09:08:56 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:39:54.453 09:08:56 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:39:54.453 09:08:56 -- bdev/blockdev.sh@339 -- # echo '' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@339 -- # env_context= 00:39:54.453 09:08:56 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:54.453 09:08:56 -- common/autotest_common.sh@1267 -- # local workload=verify 00:39:54.453 09:08:56 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:39:54.453 09:08:56 -- common/autotest_common.sh@1269 -- # local env_context= 00:39:54.453 09:08:56 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:39:54.453 09:08:56 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:54.453 09:08:56 -- common/autotest_common.sh@1287 -- # cat 00:39:54.453 09:08:56 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1300 -- # cat 00:39:54.453 09:08:56 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:39:54.453 09:08:56 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:39:54.453 09:08:56 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo 
filename=nvme2n3 00:39:54.453 09:08:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:54.453 09:08:56 -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:39:54.453 09:08:56 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:39:54.453 09:08:56 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:54.453 09:08:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:39:54.453 09:08:56 -- common/autotest_common.sh@10 -- # set +x 00:39:54.453 ************************************ 00:39:54.453 START TEST bdev_fio_rw_verify 00:39:54.453 ************************************ 00:39:54.453 09:08:56 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:54.453 09:08:56 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:54.453 09:08:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:39:54.453 09:08:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:54.453 09:08:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:39:54.453 09:08:56 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:54.453 09:08:56 -- common/autotest_common.sh@1327 -- # shift 00:39:54.453 09:08:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:39:54.453 09:08:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:39:54.453 09:08:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:39:54.453 09:08:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:54.453 09:08:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:54.453 09:08:56 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:54.453 09:08:56 -- common/autotest_common.sh@1333 -- # break 00:39:54.453 09:08:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:54.453 09:08:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:54.712 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:54.712 fio-3.35 00:39:54.712 Starting 6 threads 00:40:06.910 00:40:06.910 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74961: Thu Apr 18 09:09:07 2024 00:40:06.910 read: IOPS=29.8k, BW=116MiB/s (122MB/s)(1162MiB/10001msec) 00:40:06.910 slat (usec): min=2, max=1128, avg= 6.31, stdev= 4.73 00:40:06.910 clat (usec): min=76, max=13245, avg=626.05, stdev=273.32 00:40:06.910 lat (usec): min=82, max=13252, avg=632.35, stdev=273.99 00:40:06.910 clat percentiles (usec): 00:40:06.910 | 50.000th=[ 619], 99.000th=[ 1369], 99.900th=[ 2376], 99.990th=[ 4621], 00:40:06.910 | 99.999th=[13173] 00:40:06.910 write: IOPS=30.1k, BW=118MiB/s (123MB/s)(1178MiB/10001msec); 0 zone resets 00:40:06.910 slat (usec): min=7, max=5252, avg=26.28, stdev=34.06 00:40:06.910 clat (usec): min=102, max=13417, avg=704.83, stdev=292.90 00:40:06.911 lat (usec): min=120, max=13438, avg=731.10, stdev=296.59 00:40:06.911 clat percentiles (usec): 00:40:06.911 | 50.000th=[ 693], 99.000th=[ 1500], 99.900th=[ 2737], 99.990th=[ 4752], 00:40:06.911 | 99.999th=[13304] 00:40:06.911 bw ( KiB/s): min=96432, max=153480, per=100.00%, avg=121631.16, stdev=2990.63, samples=114 00:40:06.911 iops : min=24108, max=38370, avg=30407.74, stdev=747.65, samples=114 00:40:06.911 lat (usec) : 100=0.01%, 250=4.03%, 500=23.90%, 750=37.09%, 1000=26.15% 00:40:06.911 lat (msec) : 2=8.62%, 4=0.18%, 10=0.02%, 20=0.01% 00:40:06.911 cpu : usr=53.73%, sys=31.20%, ctx=9077, majf=0, minf=25259 00:40:06.911 IO depths : 1=12.0%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.911 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.911 issued rwts: total=297570,301492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:06.911 00:40:06.911 Run status group 0 (all jobs): 00:40:06.911 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1162MiB (1219MB), run=10001-10001msec 00:40:06.911 WRITE: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=1178MiB (1235MB), run=10001-10001msec 00:40:07.168 ----------------------------------------------------- 00:40:07.168 Suppressions used: 00:40:07.168 count bytes template 00:40:07.168 6 48 /usr/src/fio/parse.c 00:40:07.168 3710 356160 /usr/src/fio/iolog.c 00:40:07.168 1 8 libtcmalloc_minimal.so 00:40:07.168 1 904 libcrypto.so 00:40:07.168 ----------------------------------------------------- 00:40:07.168 00:40:07.168 00:40:07.168 real 0m12.859s 00:40:07.168 user 0m34.599s 00:40:07.168 sys 0m19.106s 00:40:07.168 09:09:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:07.168 09:09:09 -- common/autotest_common.sh@10 -- # set +x 00:40:07.168 
************************************ 00:40:07.168 END TEST bdev_fio_rw_verify 00:40:07.168 ************************************ 00:40:07.168 09:09:09 -- bdev/blockdev.sh@350 -- # rm -f 00:40:07.168 09:09:09 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:07.168 09:09:09 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:40:07.168 09:09:09 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:07.168 09:09:09 -- common/autotest_common.sh@1267 -- # local workload=trim 00:40:07.168 09:09:09 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:40:07.426 09:09:09 -- common/autotest_common.sh@1269 -- # local env_context= 00:40:07.426 09:09:09 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:40:07.426 09:09:09 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:40:07.426 09:09:09 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:40:07.426 09:09:09 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:40:07.426 09:09:09 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:07.426 09:09:09 -- common/autotest_common.sh@1287 -- # cat 00:40:07.426 09:09:09 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:40:07.426 09:09:09 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:40:07.426 09:09:09 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:40:07.426 09:09:09 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:07.427 09:09:09 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "87f68c92-9ee6-43c9-8e4b-ce6d7a1ec909"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "87f68c92-9ee6-43c9-8e4b-ce6d7a1ec909",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "041de362-981f-427b-a70b-abdd6e25b046"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "041de362-981f-427b-a70b-abdd6e25b046",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f4f3c8ab-4ff7-46f9-8e69-e6847fd9def3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4f3c8ab-4ff7-46f9-8e69-e6847fd9def3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ff3ad11c-7f09-450a-a34d-3d11e2eaef22"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff3ad11c-7f09-450a-a34d-3d11e2eaef22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "26efc298-34d0-406f-8482-070ec3398354"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "26efc298-34d0-406f-8482-070ec3398354",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1b8a846f-41a3-44cc-a20d-1f430d5fcacf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1b8a846f-41a3-44cc-a20d-1f430d5fcacf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:40:07.427 09:09:09 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:40:07.427 09:09:09 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:07.427 09:09:09 -- bdev/blockdev.sh@362 -- # popd 00:40:07.427 /home/vagrant/spdk_repo/spdk 00:40:07.427 09:09:09 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:40:07.427 09:09:09 -- bdev/blockdev.sh@364 -- # return 0 00:40:07.427 00:40:07.427 real 0m13.128s 00:40:07.427 user 0m34.711s 00:40:07.427 sys 0m19.245s 00:40:07.427 09:09:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:07.427 09:09:09 -- common/autotest_common.sh@10 -- # set +x 00:40:07.427 ************************************ 00:40:07.427 END TEST bdev_fio 00:40:07.427 ************************************ 00:40:07.427 09:09:09 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:07.427 09:09:09 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:07.427 09:09:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:40:07.427 09:09:09 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:40:07.427 09:09:09 -- common/autotest_common.sh@10 -- # set +x 00:40:07.427 ************************************ 00:40:07.427 START TEST bdev_verify 00:40:07.427 ************************************ 00:40:07.427 09:09:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:07.685 [2024-04-18 09:09:09.586397] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:07.685 [2024-04-18 09:09:09.586754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:40:07.685 [2024-04-18 09:09:09.774192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:08.261 [2024-04-18 09:09:10.077849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.261 [2024-04-18 09:09:10.077883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.827 Running I/O for 5 seconds... 00:40:14.099 00:40:14.099 Latency(us) 00:40:14.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:14.099 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0xa0000 00:40:14.099 nvme0n1 : 5.04 1649.94 6.45 0.00 0.00 77429.13 14979.66 66909.14 00:40:14.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0xa0000 length 0xa0000 00:40:14.099 nvme0n1 : 5.05 1623.17 6.34 0.00 0.00 78723.22 11858.90 75397.61 00:40:14.099 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0xbd0bd 00:40:14.099 nvme1n1 : 5.04 2852.98 11.14 0.00 0.00 44635.52 4556.31 71403.03 00:40:14.099 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:40:14.099 nvme1n1 : 5.04 2774.51 10.84 0.00 0.00 45949.06 5617.37 68407.10 00:40:14.099 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0x80000 00:40:14.099 nvme2n1 : 5.05 1674.10 6.54 0.00 0.00 75942.05 8738.13 67907.78 00:40:14.099 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x80000 length 0x80000 00:40:14.099 nvme2n1 : 5.04 1624.68 6.35 0.00 0.00 78263.58 9237.46 69905.07 00:40:14.099 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0x80000 00:40:14.099 nvme2n2 : 5.06 1670.77 6.53 0.00 0.00 75936.83 7645.87 76895.57 00:40:14.099 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x80000 length 0x80000 00:40:14.099 nvme2n2 : 5.05 1622.35 6.34 0.00 0.00 78220.67 12732.71 66909.14 00:40:14.099 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0x80000 00:40:14.099 nvme2n3 : 5.06 1669.90 6.52 0.00 0.00 75814.46 5274.09 71403.03 00:40:14.099 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x80000 length 
0x80000 00:40:14.099 nvme2n3 : 5.05 1621.72 6.33 0.00 0.00 78129.27 11921.31 75397.61 00:40:14.099 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x0 length 0x20000 00:40:14.099 nvme3n1 : 5.06 1669.08 6.52 0.00 0.00 75722.35 3151.97 71902.35 00:40:14.099 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:14.099 Verification LBA range: start 0x20000 length 0x20000 00:40:14.099 nvme3n1 : 5.06 1642.80 6.42 0.00 0.00 77011.33 1357.53 82388.11 00:40:14.099 =================================================================================================================== 00:40:14.099 Total : 22096.00 86.31 0.00 0.00 69016.51 1357.53 82388.11 00:40:15.474 00:40:15.474 real 0m7.726s 00:40:15.474 user 0m11.771s 00:40:15.474 sys 0m1.999s 00:40:15.474 09:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:15.474 ************************************ 00:40:15.474 END TEST bdev_verify 00:40:15.474 ************************************ 00:40:15.474 09:09:17 -- common/autotest_common.sh@10 -- # set +x 00:40:15.474 09:09:17 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:15.474 09:09:17 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:40:15.474 09:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:15.474 09:09:17 -- common/autotest_common.sh@10 -- # set +x 00:40:15.474 ************************************ 00:40:15.474 START TEST bdev_verify_big_io 00:40:15.474 ************************************ 00:40:15.474 09:09:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:15.474 [2024-04-18 09:09:17.449521] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:15.474 [2024-04-18 09:09:17.449919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75248 ] 00:40:15.733 [2024-04-18 09:09:17.629351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:15.993 [2024-04-18 09:09:17.903900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.993 [2024-04-18 09:09:17.903903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.560 Running I/O for 5 seconds... 
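The 5-second big-I/O pass now running and the 4 KiB bdev_verify pass above drive the same bdevperf binary; only the -o size changes between them. The invocation pattern, with the flags as they appear in this log (the per-flag notes are a reading aid; the -C reading is inferred from the paired Core Mask 0x1/0x2 jobs in the tables):

# -q 128   queue depth per job
# -o 65536 I/O size in bytes (4096 in the earlier bdev_verify pass)
# -w verify  write, read back, and compare
# -t 5     seconds of runtime
# -C       every core submits to every bdev, hence two jobs per device above
# -m 0x3   core mask: cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3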
00:40:23.173 00:40:23.173 Latency(us) 00:40:23.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.173 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0xa000 00:40:23.173 nvme0n1 : 5.86 117.46 7.34 0.00 0.00 1051387.59 179755.89 1030600.41 00:40:23.173 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0xa000 length 0xa000 00:40:23.173 nvme0n1 : 5.97 116.65 7.29 0.00 0.00 1059789.67 31457.28 1653754.15 00:40:23.173 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0xbd0b 00:40:23.173 nvme1n1 : 6.00 122.72 7.67 0.00 0.00 975778.97 13356.86 1094513.62 00:40:23.173 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0xbd0b length 0xbd0b 00:40:23.173 nvme1n1 : 5.96 136.97 8.56 0.00 0.00 893635.83 24217.11 1797558.86 00:40:23.173 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0x8000 00:40:23.173 nvme2n1 : 5.86 147.38 9.21 0.00 0.00 786712.44 52678.46 882801.13 00:40:23.173 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x8000 length 0x8000 00:40:23.173 nvme2n1 : 5.97 163.52 10.22 0.00 0.00 720628.56 28086.86 798915.05 00:40:23.173 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0x8000 00:40:23.173 nvme2n2 : 5.98 107.05 6.69 0.00 0.00 1034859.57 29584.82 1877450.36 00:40:23.173 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x8000 length 0x8000 00:40:23.173 nvme2n2 : 5.96 131.52 8.22 0.00 0.00 867946.78 14854.83 1685710.75 00:40:23.173 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0x8000 00:40:23.173 nvme2n3 : 6.00 127.98 8.00 0.00 0.00 861398.15 14230.67 1669732.45 00:40:23.173 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x8000 length 0x8000 00:40:23.173 nvme2n3 : 5.97 125.93 7.87 0.00 0.00 880119.02 20097.71 1877450.36 00:40:23.173 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x0 length 0x2000 00:40:23.173 nvme3n1 : 6.00 138.76 8.67 0.00 0.00 771286.48 4400.27 1805548.01 00:40:23.173 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:23.173 Verification LBA range: start 0x2000 length 0x2000 00:40:23.173 nvme3n1 : 5.96 115.35 7.21 0.00 0.00 932592.43 10922.67 2348810.24 00:40:23.173 =================================================================================================================== 00:40:23.173 Total : 1551.30 96.96 0.00 0.00 891874.36 4400.27 2348810.24 00:40:24.549 00:40:24.549 real 0m9.112s 00:40:24.549 user 0m16.220s 00:40:24.549 sys 0m0.613s 00:40:24.549 ************************************ 00:40:24.549 END TEST bdev_verify_big_io 00:40:24.549 ************************************ 00:40:24.549 09:09:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:24.549 09:09:26 -- common/autotest_common.sh@10 -- # set +x 00:40:24.549 09:09:26 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:24.549 09:09:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:40:24.549 09:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:24.549 09:09:26 -- common/autotest_common.sh@10 -- # set +x 00:40:24.549 ************************************ 00:40:24.549 START TEST bdev_write_zeroes 00:40:24.549 ************************************ 00:40:24.549 09:09:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:24.808 [2024-04-18 09:09:26.651609] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:24.808 [2024-04-18 09:09:26.651943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75373 ] 00:40:24.808 [2024-04-18 09:09:26.816972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.067 [2024-04-18 09:09:27.059060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.636 Running I/O for 1 seconds... 00:40:26.572 00:40:26.572 Latency(us) 00:40:26.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.572 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme0n1 : 1.02 9811.35 38.33 0.00 0.00 13033.73 7521.04 20597.03 00:40:26.572 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme1n1 : 1.03 16775.35 65.53 0.00 0.00 7615.75 3963.37 13918.60 00:40:26.572 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme2n1 : 1.02 9768.11 38.16 0.00 0.00 12989.08 7365.00 19972.88 00:40:26.572 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme2n2 : 1.02 9753.51 38.10 0.00 0.00 13003.19 7614.66 19972.88 00:40:26.572 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme2n3 : 1.03 9739.23 38.04 0.00 0.00 13013.41 7614.66 20472.20 00:40:26.572 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:26.572 nvme3n1 : 1.03 9725.08 37.99 0.00 0.00 13022.57 7614.66 20846.69 00:40:26.572 =================================================================================================================== 00:40:26.572 Total : 65572.63 256.14 0.00 0.00 11626.60 3963.37 20846.69 00:40:28.552 00:40:28.552 real 0m3.555s 00:40:28.552 user 0m2.682s 00:40:28.552 sys 0m0.695s 00:40:28.552 09:09:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:28.552 09:09:30 -- common/autotest_common.sh@10 -- # set +x 00:40:28.552 ************************************ 00:40:28.552 END TEST bdev_write_zeroes 00:40:28.552 ************************************ 00:40:28.552 09:09:30 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:28.552 09:09:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:40:28.552 09:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:28.552 09:09:30 -- common/autotest_common.sh@10 -- # 
set +x 00:40:28.552 ************************************ 00:40:28.552 START TEST bdev_json_nonenclosed 00:40:28.552 ************************************ 00:40:28.552 09:09:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:28.552 [2024-04-18 09:09:30.373793] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:28.552 [2024-04-18 09:09:30.374205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75444 ] 00:40:28.552 [2024-04-18 09:09:30.558479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.811 [2024-04-18 09:09:30.821966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.811 [2024-04-18 09:09:30.822310] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:40:28.811 [2024-04-18 09:09:30.822566] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:28.811 [2024-04-18 09:09:30.822663] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:29.378 00:40:29.378 real 0m1.105s 00:40:29.378 user 0m0.789s 00:40:29.378 sys 0m0.205s 00:40:29.378 09:09:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:29.378 09:09:31 -- common/autotest_common.sh@10 -- # set +x 00:40:29.378 ************************************ 00:40:29.378 END TEST bdev_json_nonenclosed 00:40:29.378 ************************************ 00:40:29.378 09:09:31 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:29.378 09:09:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:40:29.378 09:09:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:29.378 09:09:31 -- common/autotest_common.sh@10 -- # set +x 00:40:29.636 ************************************ 00:40:29.636 START TEST bdev_json_nonarray 00:40:29.636 ************************************ 00:40:29.636 09:09:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:29.636 [2024-04-18 09:09:31.616631] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:29.637 [2024-04-18 09:09:31.617003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75479 ] 00:40:29.894 [2024-04-18 09:09:31.804184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.151 [2024-04-18 09:09:32.129156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.151 [2024-04-18 09:09:32.129486] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
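Both negative tests feed bdevperf a config that json_config rejects on shape alone: nonenclosed.json holds valid JSON not wrapped in a single object, and nonarray.json, per the error just logged, makes "subsystems" something other than an array. The minimal shape that passes both checks, sketched with a malloc bdev (the name and sizes are illustrative, though they happen to match the saved config later in this log):

# Smallest shape json_config_prepare_ctx accepts: one enclosing object,
# with "subsystems" as an array of {subsystem, config} objects.
cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
        }
      ]
    }
  ]
}
EOF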
00:40:30.151 [2024-04-18 09:09:32.129712] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:30.151 [2024-04-18 09:09:32.129904] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:30.716 00:40:30.716 real 0m1.144s 00:40:30.716 user 0m0.850s 00:40:30.717 sys 0m0.183s 00:40:30.717 09:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:30.717 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:40:30.717 ************************************ 00:40:30.717 END TEST bdev_json_nonarray 00:40:30.717 ************************************ 00:40:30.717 09:09:32 -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:40:30.717 09:09:32 -- bdev/blockdev.sh@811 -- # cleanup 00:40:30.717 09:09:32 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:40:30.717 09:09:32 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:30.717 09:09:32 -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:40:30.717 09:09:32 -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:31.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:34.561 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:40:34.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:34.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:34.561 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:40:34.561 ************************************ 00:40:34.561 END TEST blockdev_xnvme 00:40:34.561 ************************************ 00:40:34.561 00:40:34.561 real 1m10.782s 00:40:34.561 user 1m47.051s 00:40:34.561 sys 0m34.538s 00:40:34.561 09:09:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:34.561 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:40:34.561 09:09:36 -- spdk/autotest.sh@249 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:40:34.561 09:09:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:34.561 09:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:34.561 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:40:34.561 ************************************ 00:40:34.561 START TEST ublk 00:40:34.561 ************************************ 00:40:34.561 09:09:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:40:34.561 * Looking for test storage... 
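The setup.sh lines above are the usual kernel-to-userspace driver handoff: each NVMe controller is detached from the kernel nvme driver and bound to uio_pci_generic so SPDK can map its BARs from user space. In raw sysfs terms the mechanism is roughly the following; the BDF is illustrative, and setup.sh wraps the same writes with extra checks:

# Rebind one controller from nvme to uio_pci_generic via sysfs
modprobe uio_pci_generic
bdf=0000:00:10.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach kernel nvme
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                      # attach the override driver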
00:40:34.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:40:34.561 09:09:36 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:40:34.561 09:09:36 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:40:34.561 09:09:36 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:40:34.561 09:09:36 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:40:34.561 09:09:36 -- lvol/common.sh@9 -- # AIO_BS=4096 00:40:34.561 09:09:36 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:40:34.561 09:09:36 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:40:34.561 09:09:36 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:40:34.561 09:09:36 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:40:34.561 09:09:36 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:40:34.561 09:09:36 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:40:34.561 09:09:36 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:40:34.561 09:09:36 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:40:34.561 09:09:36 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:40:34.561 09:09:36 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:40:34.561 09:09:36 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:40:34.561 09:09:36 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:40:34.561 09:09:36 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:40:34.561 09:09:36 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:40:34.561 09:09:36 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:40:34.561 09:09:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:34.561 09:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:34.561 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:40:34.561 ************************************ 00:40:34.561 START TEST test_save_ublk_config 00:40:34.561 ************************************ 00:40:34.561 09:09:36 -- common/autotest_common.sh@1111 -- # test_save_config 00:40:34.561 09:09:36 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:40:34.561 09:09:36 -- ublk/ublk.sh@103 -- # tgtpid=75775 00:40:34.561 09:09:36 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:40:34.561 09:09:36 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:40:34.561 09:09:36 -- ublk/ublk.sh@106 -- # waitforlisten 75775 00:40:34.561 09:09:36 -- common/autotest_common.sh@817 -- # '[' -z 75775 ']' 00:40:34.561 09:09:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:34.561 09:09:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:34.561 09:09:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:34.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:34.561 09:09:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:34.561 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:40:34.820 [2024-04-18 09:09:36.692982] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
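test_save_ublk_config needs a running ublk target before it can snapshot anything: the kernel module goes in first, then spdk_tgt with ublk tracing, then the target and a malloc-backed disk over RPC. A sketch reconstructed from the RPC names in this log and the saved config further down; the short rpc.py flag spellings here are assumptions and may drift between SPDK versions:

# Bring up a ublk disk backed by a malloc bdev (mirrors ublk.sh@133 onward)
modprobe ublk_drv
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b malloc0 32 4096     # 8192 blocks x 4 KiB, per the saved config
$rpc ublk_create_target -m 1                   # cpumask "1", per the saved config
$rpc ublk_start_disk malloc0 0 -q 1 -d 128     # ublk id 0 appears as /dev/ublkb0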
00:40:34.820 [2024-04-18 09:09:36.693358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75775 ] 00:40:34.820 [2024-04-18 09:09:36.887959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.387 [2024-04-18 09:09:37.209444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.322 09:09:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:36.322 09:09:38 -- common/autotest_common.sh@850 -- # return 0 00:40:36.322 09:09:38 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:40:36.322 09:09:38 -- ublk/ublk.sh@108 -- # rpc_cmd 00:40:36.322 09:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:36.322 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:40:36.322 [2024-04-18 09:09:38.288989] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:36.322 malloc0 00:40:36.322 [2024-04-18 09:09:38.395763] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:40:36.322 [2024-04-18 09:09:38.395880] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:40:36.322 [2024-04-18 09:09:38.395892] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:36.322 [2024-04-18 09:09:38.395907] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:36.322 [2024-04-18 09:09:38.412566] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:36.322 [2024-04-18 09:09:38.412626] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:36.580 [2024-04-18 09:09:38.438394] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:36.580 [2024-04-18 09:09:38.438560] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:36.580 [2024-04-18 09:09:38.476815] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:36.580 0 00:40:36.580 09:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:36.580 09:09:38 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:40:36.580 09:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:36.580 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:40:36.839 09:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:36.839 09:09:38 -- ublk/ublk.sh@115 -- # config='{ 00:40:36.839 "subsystems": [ 00:40:36.839 { 00:40:36.839 "subsystem": "keyring", 00:40:36.839 "config": [] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "iobuf", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "iobuf_set_options", 00:40:36.839 "params": { 00:40:36.839 "small_pool_count": 8192, 00:40:36.839 "large_pool_count": 1024, 00:40:36.839 "small_bufsize": 8192, 00:40:36.839 "large_bufsize": 135168 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "sock", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "sock_impl_set_options", 00:40:36.839 "params": { 00:40:36.839 "impl_name": "posix", 00:40:36.839 "recv_buf_size": 2097152, 00:40:36.839 "send_buf_size": 2097152, 00:40:36.839 "enable_recv_pipe": true, 00:40:36.839 "enable_quickack": false, 00:40:36.839 "enable_placement_id": 0, 00:40:36.839 "enable_zerocopy_send_server": true, 00:40:36.839 
"enable_zerocopy_send_client": false, 00:40:36.839 "zerocopy_threshold": 0, 00:40:36.839 "tls_version": 0, 00:40:36.839 "enable_ktls": false 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "sock_impl_set_options", 00:40:36.839 "params": { 00:40:36.839 "impl_name": "ssl", 00:40:36.839 "recv_buf_size": 4096, 00:40:36.839 "send_buf_size": 4096, 00:40:36.839 "enable_recv_pipe": true, 00:40:36.839 "enable_quickack": false, 00:40:36.839 "enable_placement_id": 0, 00:40:36.839 "enable_zerocopy_send_server": true, 00:40:36.839 "enable_zerocopy_send_client": false, 00:40:36.839 "zerocopy_threshold": 0, 00:40:36.839 "tls_version": 0, 00:40:36.839 "enable_ktls": false 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "vmd", 00:40:36.839 "config": [] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "accel", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "accel_set_options", 00:40:36.839 "params": { 00:40:36.839 "small_cache_size": 128, 00:40:36.839 "large_cache_size": 16, 00:40:36.839 "task_count": 2048, 00:40:36.839 "sequence_count": 2048, 00:40:36.839 "buf_count": 2048 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "bdev", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "bdev_set_options", 00:40:36.839 "params": { 00:40:36.839 "bdev_io_pool_size": 65535, 00:40:36.839 "bdev_io_cache_size": 256, 00:40:36.839 "bdev_auto_examine": true, 00:40:36.839 "iobuf_small_cache_size": 128, 00:40:36.839 "iobuf_large_cache_size": 16 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_raid_set_options", 00:40:36.839 "params": { 00:40:36.839 "process_window_size_kb": 1024 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_iscsi_set_options", 00:40:36.839 "params": { 00:40:36.839 "timeout_sec": 30 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_nvme_set_options", 00:40:36.839 "params": { 00:40:36.839 "action_on_timeout": "none", 00:40:36.839 "timeout_us": 0, 00:40:36.839 "timeout_admin_us": 0, 00:40:36.839 "keep_alive_timeout_ms": 10000, 00:40:36.839 "arbitration_burst": 0, 00:40:36.839 "low_priority_weight": 0, 00:40:36.839 "medium_priority_weight": 0, 00:40:36.839 "high_priority_weight": 0, 00:40:36.839 "nvme_adminq_poll_period_us": 10000, 00:40:36.839 "nvme_ioq_poll_period_us": 0, 00:40:36.839 "io_queue_requests": 0, 00:40:36.839 "delay_cmd_submit": true, 00:40:36.839 "transport_retry_count": 4, 00:40:36.839 "bdev_retry_count": 3, 00:40:36.839 "transport_ack_timeout": 0, 00:40:36.839 "ctrlr_loss_timeout_sec": 0, 00:40:36.839 "reconnect_delay_sec": 0, 00:40:36.839 "fast_io_fail_timeout_sec": 0, 00:40:36.839 "disable_auto_failback": false, 00:40:36.839 "generate_uuids": false, 00:40:36.839 "transport_tos": 0, 00:40:36.839 "nvme_error_stat": false, 00:40:36.839 "rdma_srq_size": 0, 00:40:36.839 "io_path_stat": false, 00:40:36.839 "allow_accel_sequence": false, 00:40:36.839 "rdma_max_cq_size": 0, 00:40:36.839 "rdma_cm_event_timeout_ms": 0, 00:40:36.839 "dhchap_digests": [ 00:40:36.839 "sha256", 00:40:36.839 "sha384", 00:40:36.839 "sha512" 00:40:36.839 ], 00:40:36.839 "dhchap_dhgroups": [ 00:40:36.839 "null", 00:40:36.839 "ffdhe2048", 00:40:36.839 "ffdhe3072", 00:40:36.839 "ffdhe4096", 00:40:36.839 "ffdhe6144", 00:40:36.839 "ffdhe8192" 00:40:36.839 ] 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_nvme_set_hotplug", 00:40:36.839 "params": { 00:40:36.839 
"period_us": 100000, 00:40:36.839 "enable": false 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_malloc_create", 00:40:36.839 "params": { 00:40:36.839 "name": "malloc0", 00:40:36.839 "num_blocks": 8192, 00:40:36.839 "block_size": 4096, 00:40:36.839 "physical_block_size": 4096, 00:40:36.839 "uuid": "1f2d19bd-ed5e-4bee-9763-4fe18dd183c9", 00:40:36.839 "optimal_io_boundary": 0 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "bdev_wait_for_examine" 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "scsi", 00:40:36.839 "config": null 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "scheduler", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "framework_set_scheduler", 00:40:36.839 "params": { 00:40:36.839 "name": "static" 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "vhost_scsi", 00:40:36.839 "config": [] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "vhost_blk", 00:40:36.839 "config": [] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "ublk", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "ublk_create_target", 00:40:36.839 "params": { 00:40:36.839 "cpumask": "1" 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "ublk_start_disk", 00:40:36.839 "params": { 00:40:36.839 "bdev_name": "malloc0", 00:40:36.839 "ublk_id": 0, 00:40:36.839 "num_queues": 1, 00:40:36.839 "queue_depth": 128 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "nbd", 00:40:36.839 "config": [] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "nvmf", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "nvmf_set_config", 00:40:36.839 "params": { 00:40:36.839 "discovery_filter": "match_any", 00:40:36.839 "admin_cmd_passthru": { 00:40:36.839 "identify_ctrlr": false 00:40:36.839 } 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "nvmf_set_max_subsystems", 00:40:36.839 "params": { 00:40:36.839 "max_subsystems": 1024 00:40:36.839 } 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "method": "nvmf_set_crdt", 00:40:36.839 "params": { 00:40:36.839 "crdt1": 0, 00:40:36.839 "crdt2": 0, 00:40:36.839 "crdt3": 0 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }, 00:40:36.839 { 00:40:36.839 "subsystem": "iscsi", 00:40:36.839 "config": [ 00:40:36.839 { 00:40:36.839 "method": "iscsi_set_options", 00:40:36.839 "params": { 00:40:36.839 "node_base": "iqn.2016-06.io.spdk", 00:40:36.839 "max_sessions": 128, 00:40:36.839 "max_connections_per_session": 2, 00:40:36.839 "max_queue_depth": 64, 00:40:36.839 "default_time2wait": 2, 00:40:36.839 "default_time2retain": 20, 00:40:36.839 "first_burst_length": 8192, 00:40:36.839 "immediate_data": true, 00:40:36.839 "allow_duplicated_isid": false, 00:40:36.839 "error_recovery_level": 0, 00:40:36.839 "nop_timeout": 60, 00:40:36.839 "nop_in_interval": 30, 00:40:36.839 "disable_chap": false, 00:40:36.839 "require_chap": false, 00:40:36.839 "mutual_chap": false, 00:40:36.839 "chap_group": 0, 00:40:36.839 "max_large_datain_per_connection": 64, 00:40:36.839 "max_r2t_per_connection": 4, 00:40:36.839 "pdu_pool_size": 36864, 00:40:36.839 "immediate_data_pool_size": 16384, 00:40:36.839 "data_out_pool_size": 2048 00:40:36.839 } 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 } 00:40:36.839 ] 00:40:36.839 }' 00:40:36.839 09:09:38 -- ublk/ublk.sh@116 -- # killprocess 75775 00:40:36.839 09:09:38 -- 
common/autotest_common.sh@936 -- # '[' -z 75775 ']' 00:40:36.839 09:09:38 -- common/autotest_common.sh@940 -- # kill -0 75775 00:40:36.839 09:09:38 -- common/autotest_common.sh@941 -- # uname 00:40:36.839 09:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:40:36.839 09:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75775 00:40:36.840 09:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:40:36.840 09:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:40:36.840 09:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75775' 00:40:36.840 killing process with pid 75775 00:40:36.840 09:09:38 -- common/autotest_common.sh@955 -- # kill 75775 00:40:36.840 09:09:38 -- common/autotest_common.sh@960 -- # wait 75775 00:40:38.766 [2024-04-18 09:09:40.393263] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:38.766 [2024-04-18 09:09:40.437441] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:38.766 [2024-04-18 09:09:40.437766] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:38.766 [2024-04-18 09:09:40.458444] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:38.766 [2024-04-18 09:09:40.458670] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:38.766 [2024-04-18 09:09:40.458718] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:38.766 [2024-04-18 09:09:40.458840] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:40:38.766 [2024-04-18 09:09:40.459103] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:40:40.144 09:09:42 -- ublk/ublk.sh@119 -- # tgtpid=75852 00:40:40.144 09:09:42 -- ublk/ublk.sh@121 -- # waitforlisten 75852 00:40:40.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.144 09:09:42 -- common/autotest_common.sh@817 -- # '[' -z 75852 ']' 00:40:40.144 09:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.144 09:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:40.144 09:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
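The traces that follow are the second half of test_save_ublk_config: the JSON dumped above is piped back into a fresh target through process substitution (-c /dev/fd/63), proving that a saved configuration can rebuild the same ublk device. The round-trip, as a minimal bash sketch (cfg and tgtpid are illustrative variable names; killprocess and waitforlisten are the autotest_common.sh helpers visible in the traces):

    # capture the live config, kill the target, replay the config into a new one
    cfg=$(scripts/rpc.py save_config)
    killprocess "$tgtpid"
    build/bin/spdk_tgt -L ublk -c <(echo "$cfg") &
    tgtpid=$!
    waitforlisten "$tgtpid"
    scripts/rpc.py ublk_get_disks        # /dev/ublkb0 must reappear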
00:40:40.144 09:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:40.144 09:09:42 -- common/autotest_common.sh@10 -- # set +x 00:40:40.144 09:09:42 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:40:40.144 09:09:42 -- ublk/ublk.sh@118 -- # echo '{ 00:40:40.144 "subsystems": [ 00:40:40.144 { 00:40:40.144 "subsystem": "keyring", 00:40:40.144 "config": [] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "iobuf", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "iobuf_set_options", 00:40:40.144 "params": { 00:40:40.144 "small_pool_count": 8192, 00:40:40.144 "large_pool_count": 1024, 00:40:40.144 "small_bufsize": 8192, 00:40:40.144 "large_bufsize": 135168 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "sock", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "sock_impl_set_options", 00:40:40.144 "params": { 00:40:40.144 "impl_name": "posix", 00:40:40.144 "recv_buf_size": 2097152, 00:40:40.144 "send_buf_size": 2097152, 00:40:40.144 "enable_recv_pipe": true, 00:40:40.144 "enable_quickack": false, 00:40:40.144 "enable_placement_id": 0, 00:40:40.144 "enable_zerocopy_send_server": true, 00:40:40.144 "enable_zerocopy_send_client": false, 00:40:40.144 "zerocopy_threshold": 0, 00:40:40.144 "tls_version": 0, 00:40:40.144 "enable_ktls": false 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "sock_impl_set_options", 00:40:40.144 "params": { 00:40:40.144 "impl_name": "ssl", 00:40:40.144 "recv_buf_size": 4096, 00:40:40.144 "send_buf_size": 4096, 00:40:40.144 "enable_recv_pipe": true, 00:40:40.144 "enable_quickack": false, 00:40:40.144 "enable_placement_id": 0, 00:40:40.144 "enable_zerocopy_send_server": true, 00:40:40.144 "enable_zerocopy_send_client": false, 00:40:40.144 "zerocopy_threshold": 0, 00:40:40.144 "tls_version": 0, 00:40:40.144 "enable_ktls": false 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "vmd", 00:40:40.144 "config": [] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "accel", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "accel_set_options", 00:40:40.144 "params": { 00:40:40.144 "small_cache_size": 128, 00:40:40.144 "large_cache_size": 16, 00:40:40.144 "task_count": 2048, 00:40:40.144 "sequence_count": 2048, 00:40:40.144 "buf_count": 2048 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "bdev", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "bdev_set_options", 00:40:40.144 "params": { 00:40:40.144 "bdev_io_pool_size": 65535, 00:40:40.144 "bdev_io_cache_size": 256, 00:40:40.144 "bdev_auto_examine": true, 00:40:40.144 "iobuf_small_cache_size": 128, 00:40:40.144 "iobuf_large_cache_size": 16 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_raid_set_options", 00:40:40.144 "params": { 00:40:40.144 "process_window_size_kb": 1024 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_iscsi_set_options", 00:40:40.144 "params": { 00:40:40.144 "timeout_sec": 30 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_nvme_set_options", 00:40:40.144 "params": { 00:40:40.144 "action_on_timeout": "none", 00:40:40.144 "timeout_us": 0, 00:40:40.144 "timeout_admin_us": 0, 00:40:40.144 "keep_alive_timeout_ms": 10000, 00:40:40.144 "arbitration_burst": 0, 00:40:40.144 "low_priority_weight": 0, 00:40:40.144 
"medium_priority_weight": 0, 00:40:40.144 "high_priority_weight": 0, 00:40:40.144 "nvme_adminq_poll_period_us": 10000, 00:40:40.144 "nvme_ioq_poll_period_us": 0, 00:40:40.144 "io_queue_requests": 0, 00:40:40.144 "delay_cmd_submit": true, 00:40:40.144 "transport_retry_count": 4, 00:40:40.144 "bdev_retry_count": 3, 00:40:40.144 "transport_ack_timeout": 0, 00:40:40.144 "ctrlr_loss_timeout_sec": 0, 00:40:40.144 "reconnect_delay_sec": 0, 00:40:40.144 "fast_io_fail_timeout_sec": 0, 00:40:40.144 "disable_auto_failback": false, 00:40:40.144 "generate_uuids": false, 00:40:40.144 "transport_tos": 0, 00:40:40.144 "nvme_error_stat": false, 00:40:40.144 "rdma_srq_size": 0, 00:40:40.144 "io_path_stat": false, 00:40:40.144 "allow_accel_sequence": false, 00:40:40.144 "rdma_max_cq_size": 0, 00:40:40.144 "rdma_cm_event_timeout_ms": 0, 00:40:40.144 "dhchap_digests": [ 00:40:40.144 "sha256", 00:40:40.144 "sha384", 00:40:40.144 "sha512" 00:40:40.144 ], 00:40:40.144 "dhchap_dhgroups": [ 00:40:40.144 "null", 00:40:40.144 "ffdhe2048", 00:40:40.144 "ffdhe3072", 00:40:40.144 "ffdhe4096", 00:40:40.144 "ffdhe6144", 00:40:40.144 "ffdhe8192" 00:40:40.144 ] 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_nvme_set_hotplug", 00:40:40.144 "params": { 00:40:40.144 "period_us": 100000, 00:40:40.144 "enable": false 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_malloc_create", 00:40:40.144 "params": { 00:40:40.144 "name": "malloc0", 00:40:40.144 "num_blocks": 8192, 00:40:40.144 "block_size": 4096, 00:40:40.144 "physical_block_size": 4096, 00:40:40.144 "uuid": "1f2d19bd-ed5e-4bee-9763-4fe18dd183c9", 00:40:40.144 "optimal_io_boundary": 0 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "bdev_wait_for_examine" 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "scsi", 00:40:40.144 "config": null 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "scheduler", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "framework_set_scheduler", 00:40:40.144 "params": { 00:40:40.144 "name": "static" 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "vhost_scsi", 00:40:40.144 "config": [] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "vhost_blk", 00:40:40.144 "config": [] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "ublk", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "ublk_create_target", 00:40:40.144 "params": { 00:40:40.144 "cpumask": "1" 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "ublk_start_disk", 00:40:40.144 "params": { 00:40:40.144 "bdev_name": "malloc0", 00:40:40.144 "ublk_id": 0, 00:40:40.144 "num_queues": 1, 00:40:40.144 "queue_depth": 128 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "nbd", 00:40:40.144 "config": [] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "nvmf", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "nvmf_set_config", 00:40:40.144 "params": { 00:40:40.144 "discovery_filter": "match_any", 00:40:40.144 "admin_cmd_passthru": { 00:40:40.144 "identify_ctrlr": false 00:40:40.144 } 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "nvmf_set_max_subsystems", 00:40:40.144 "params": { 00:40:40.144 "max_subsystems": 1024 00:40:40.144 } 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "method": "nvmf_set_crdt", 00:40:40.144 "params": { 00:40:40.144 "crdt1": 0, 00:40:40.144 "crdt2": 0, 
00:40:40.144 "crdt3": 0 00:40:40.144 } 00:40:40.144 } 00:40:40.144 ] 00:40:40.144 }, 00:40:40.144 { 00:40:40.144 "subsystem": "iscsi", 00:40:40.144 "config": [ 00:40:40.144 { 00:40:40.144 "method": "iscsi_set_options", 00:40:40.144 "params": { 00:40:40.145 "node_base": "iqn.2016-06.io.spdk", 00:40:40.145 "max_sessions": 128, 00:40:40.145 "max_connections_per_session": 2, 00:40:40.145 "max_queue_depth": 64, 00:40:40.145 "default_time2wait": 2, 00:40:40.145 "default_time2retain": 20, 00:40:40.145 "first_burst_length": 8192, 00:40:40.145 "immediate_data": true, 00:40:40.145 "allow_duplicated_isid": false, 00:40:40.145 "error_recovery_level": 0, 00:40:40.145 "nop_timeout": 60, 00:40:40.145 "nop_in_interval": 30, 00:40:40.145 "disable_chap": false, 00:40:40.145 "require_chap": false, 00:40:40.145 "mutual_chap": false, 00:40:40.145 "chap_group": 0, 00:40:40.145 "max_large_datain_per_connection": 64, 00:40:40.145 "max_r2t_per_connection": 4, 00:40:40.145 "pdu_pool_size": 36864, 00:40:40.145 "immediate_data_pool_size": 16384, 00:40:40.145 "data_out_pool_size": 2048 00:40:40.145 } 00:40:40.145 } 00:40:40.145 ] 00:40:40.145 } 00:40:40.145 ] 00:40:40.145 }' 00:40:40.145 [2024-04-18 09:09:42.182789] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:40:40.145 [2024-04-18 09:09:42.183417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75852 ] 00:40:40.404 [2024-04-18 09:09:42.367953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.662 [2024-04-18 09:09:42.630512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.038 [2024-04-18 09:09:43.830827] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:42.038 [2024-04-18 09:09:43.835076] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:40:42.038 [2024-04-18 09:09:43.835316] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:40:42.038 [2024-04-18 09:09:43.835362] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:42.038 [2024-04-18 09:09:43.835467] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:42.038 [2024-04-18 09:09:43.855421] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:42.038 [2024-04-18 09:09:43.855570] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:42.038 [2024-04-18 09:09:43.860411] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:42.038 [2024-04-18 09:09:43.860663] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:42.038 [2024-04-18 09:09:43.894435] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:42.038 09:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:42.038 09:09:43 -- common/autotest_common.sh@850 -- # return 0 00:40:42.038 09:09:43 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:40:42.038 09:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:42.038 09:09:43 -- common/autotest_common.sh@10 -- # set +x 00:40:42.038 09:09:43 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:40:42.038 09:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:42.038 09:09:43 -- ublk/ublk.sh@122 
-- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:40:42.038 09:09:43 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:40:42.038 09:09:43 -- ublk/ublk.sh@125 -- # killprocess 75852 00:40:42.038 09:09:43 -- common/autotest_common.sh@936 -- # '[' -z 75852 ']' 00:40:42.038 09:09:43 -- common/autotest_common.sh@940 -- # kill -0 75852 00:40:42.038 09:09:43 -- common/autotest_common.sh@941 -- # uname 00:40:42.038 09:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:40:42.038 09:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75852 00:40:42.038 killing process with pid 75852 00:40:42.038 09:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:40:42.038 09:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:40:42.038 09:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75852' 00:40:42.038 09:09:43 -- common/autotest_common.sh@955 -- # kill 75852 00:40:42.038 09:09:43 -- common/autotest_common.sh@960 -- # wait 75852 00:40:43.946 [2024-04-18 09:09:45.829756] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:40:43.946 [2024-04-18 09:09:45.858480] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:40:43.946 [2024-04-18 09:09:45.865518] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:40:43.946 [2024-04-18 09:09:45.874423] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:40:43.946 [2024-04-18 09:09:45.874688] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:40:43.946 [2024-04-18 09:09:45.874737] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:40:43.946 [2024-04-18 09:09:45.874858] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:40:43.946 [2024-04-18 09:09:45.878600] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:40:45.912 09:09:47 -- ublk/ublk.sh@126 -- # trap - EXIT 00:40:45.912 00:40:45.912 real 0m11.028s 00:40:45.912 user 0m9.554s 00:40:45.912 sys 0m2.341s 00:40:45.912 09:09:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:40:45.912 09:09:47 -- common/autotest_common.sh@10 -- # set +x 00:40:45.912 ************************************ 00:40:45.912 END TEST test_save_ublk_config 00:40:45.912 ************************************ 00:40:45.912 09:09:47 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:40:45.912 09:09:47 -- ublk/ublk.sh@139 -- # spdk_pid=75945 00:40:45.912 09:09:47 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:45.912 09:09:47 -- ublk/ublk.sh@141 -- # waitforlisten 75945 00:40:45.912 09:09:47 -- common/autotest_common.sh@817 -- # '[' -z 75945 ']' 00:40:45.912 09:09:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.912 09:09:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:40:45.912 09:09:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.912 09:09:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:40:45.912 09:09:47 -- common/autotest_common.sh@10 -- # set +x 00:40:45.912 [2024-04-18 09:09:47.790303] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
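test_save_ublk_config ends here; the remaining suites run against a fresh spdk_tgt started with two cores (-m 0x3) and ublk debug tracing. The create path that test_create_ublk walks below maps onto a handful of rpc.py calls (a sketch; rpc_cmd in the traces is the autotest wrapper around scripts/rpc.py):

    build/bin/spdk_tgt -m 0x3 -L ublk &            # reactors on cores 0 and 1
    waitforlisten $!
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096     # 128 MiB bdev, 4 KiB blocks -> Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # ADD_DEV, SET_PARAMS, START_DEV
    test -b /dev/ublkb0                            # node appears once START_DEV completes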
00:40:45.912 [2024-04-18 09:09:47.790763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75945 ] 00:40:45.912 [2024-04-18 09:09:47.974071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:46.478 [2024-04-18 09:09:48.301939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.478 [2024-04-18 09:09:48.301973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:47.412 09:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:40:47.412 09:09:49 -- common/autotest_common.sh@850 -- # return 0 00:40:47.412 09:09:49 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:40:47.412 09:09:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:40:47.412 09:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:40:47.412 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:40:47.412 ************************************ 00:40:47.412 START TEST test_create_ublk 00:40:47.412 ************************************ 00:40:47.412 09:09:49 -- common/autotest_common.sh@1111 -- # test_create_ublk 00:40:47.412 09:09:49 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:40:47.412 09:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:47.412 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:40:47.412 [2024-04-18 09:09:49.474969] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:40:47.412 09:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:47.412 09:09:49 -- ublk/ublk.sh@33 -- # ublk_target= 00:40:47.412 09:09:49 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:40:47.412 09:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:47.412 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:40:47.979 09:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:47.979 09:09:49 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:40:47.979 09:09:49 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:40:47.979 09:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:47.979 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:40:47.979 [2024-04-18 09:09:49.851094] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:40:47.979 [2024-04-18 09:09:49.851808] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:40:47.979 [2024-04-18 09:09:49.864403] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:40:47.979 [2024-04-18 09:09:49.864534] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:40:47.979 [2024-04-18 09:09:49.889667] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:40:47.979 [2024-04-18 09:09:49.889945] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:40:47.979 [2024-04-18 09:09:49.915426] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:40:47.979 [2024-04-18 09:09:49.921642] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:40:47.979 [2024-04-18 09:09:49.968459] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:40:47.979 09:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:47.979 09:09:49 -- 
ublk/ublk.sh@37 -- # ublk_id=0 00:40:47.979 09:09:49 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:40:47.979 09:09:49 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:40:47.979 09:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:40:47.979 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:40:47.979 09:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:40:47.979 09:09:50 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:40:47.979 { 00:40:47.979 "ublk_device": "/dev/ublkb0", 00:40:47.979 "id": 0, 00:40:47.979 "queue_depth": 512, 00:40:47.979 "num_queues": 4, 00:40:47.979 "bdev_name": "Malloc0" 00:40:47.979 } 00:40:47.979 ]' 00:40:47.979 09:09:50 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:40:47.979 09:09:50 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:40:47.979 09:09:50 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:40:48.239 09:09:50 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:40:48.239 09:09:50 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:40:48.239 09:09:50 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:40:48.239 09:09:50 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:40:48.239 09:09:50 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:40:48.239 09:09:50 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:40:48.239 09:09:50 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:40:48.239 09:09:50 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:40:48.239 09:09:50 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:40:48.239 09:09:50 -- lvol/common.sh@41 -- # local offset=0 00:40:48.239 09:09:50 -- lvol/common.sh@42 -- # local size=134217728 00:40:48.239 09:09:50 -- lvol/common.sh@43 -- # local rw=write 00:40:48.239 09:09:50 -- lvol/common.sh@44 -- # local pattern=0xcc 00:40:48.239 09:09:50 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:40:48.239 09:09:50 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:40:48.239 09:09:50 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:40:48.239 09:09:50 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:40:48.239 09:09:50 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:40:48.239 09:09:50 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:40:48.239 fio: verification read phase will never start because write phase uses all of runtime 00:40:48.239 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:40:48.239 fio-3.35 00:40:48.239 Starting 1 process 00:41:00.508 00:41:00.508 fio_test: (groupid=0, jobs=1): err= 0: pid=76007: Thu Apr 18 09:10:00 2024 00:41:00.508 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(452MiB/10001msec); 0 zone resets 00:41:00.508 clat (usec): min=42, max=13168, avg=85.13, stdev=201.11 00:41:00.508 lat (usec): min=43, max=13168, avg=85.82, stdev=201.13 00:41:00.508 clat percentiles (usec): 00:41:00.508 | 1.00th=[ 52], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 64], 00:41:00.508 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 74], 00:41:00.508 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 133], 95.00th=[ 151], 
00:41:00.508 | 99.00th=[ 192], 99.50th=[ 212], 99.90th=[ 277], 99.95th=[ 586], 00:41:00.508 | 99.99th=[12518] 00:41:00.508 bw ( KiB/s): min=23576, max=65928, per=97.98%, avg=45334.32, stdev=13922.69, samples=19 00:41:00.508 iops : min= 5894, max=16482, avg=11333.58, stdev=3480.67, samples=19 00:41:00.508 lat (usec) : 50=0.17%, 100=82.33%, 250=17.35%, 500=0.11%, 750=0.01% 00:41:00.508 lat (usec) : 1000=0.01% 00:41:00.508 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02% 00:41:00.508 cpu : usr=3.06%, sys=8.82%, ctx=115704, majf=0, minf=797 00:41:00.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.508 issued rwts: total=0,115687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:00.508 00:41:00.508 Run status group 0 (all jobs): 00:41:00.508 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=452MiB (474MB), run=10001-10001msec 00:41:00.508 00:41:00.508 Disk stats (read/write): 00:41:00.508 ublkb0: ios=0/114036, merge=0/0, ticks=0/8699, in_queue=8700, util=97.97% 00:41:00.509 09:10:00 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:41:00.509 09:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:00.469839] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:00.509 [2024-04-18 09:10:00.509473] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:00.509 [2024-04-18 09:10:00.511664] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:00.509 [2024-04-18 09:10:00.538430] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:00.509 [2024-04-18 09:10:00.539009] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:00.509 [2024-04-18 09:10:00.539137] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:00.509 09:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:00 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:41:00.509 09:10:00 -- common/autotest_common.sh@638 -- # local es=0 00:41:00.509 09:10:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:41:00.509 09:10:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:41:00.509 09:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:00.509 09:10:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:41:00.509 09:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:41:00.509 09:10:00 -- common/autotest_common.sh@641 -- # rpc_cmd ublk_stop_disk 0 00:41:00.509 09:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:00.546579] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:41:00.509 request: 00:41:00.509 { 00:41:00.509 "ublk_id": 0, 00:41:00.509 "method": "ublk_stop_disk", 00:41:00.509 "req_id": 1 00:41:00.509 } 00:41:00.509 Got JSON-RPC error response 00:41:00.509 response: 00:41:00.509 { 00:41:00.509 "code": -19, 00:41:00.509 "message": "No such device" 00:41:00.509 } 00:41:00.509 09:10:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
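The request/response pair above is the suite's negative test: once ublkb0 is stopped, a second ublk_stop_disk 0 must come back with JSON-RPC error -19 (ENODEV). A sketch of that assertion (the rpc.err capture file is illustrative, and rpc.py's error output landing on stderr is an assumption of this sketch):

    # a repeated stop on a deleted disk has to fail with "No such device"
    if scripts/rpc.py ublk_stop_disk 0 2>rpc.err; then
        echo 'unexpected success stopping a missing ublk disk' >&2
        exit 1
    fi
    grep -q '"code": -19' rpc.err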
00:41:00.509 09:10:00 -- common/autotest_common.sh@641 -- # es=1 00:41:00.509 09:10:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:41:00.509 09:10:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:41:00.509 09:10:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:41:00.509 09:10:00 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:41:00.509 09:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:00.564565] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:41:00.509 [2024-04-18 09:10:00.572341] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:41:00.509 [2024-04-18 09:10:00.572402] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:00.509 09:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:00 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:41:00.509 09:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:41:00.509 09:10:01 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:41:00.509 09:10:01 -- lvol/common.sh@26 -- # jq length 00:41:00.509 09:10:01 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:41:00.509 09:10:01 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:41:00.509 09:10:01 -- lvol/common.sh@28 -- # jq length 00:41:00.509 09:10:01 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:41:00.509 00:41:00.509 real 0m11.679s 00:41:00.509 user 0m0.700s 00:41:00.509 sys 0m1.008s 00:41:00.509 09:10:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 ************************************ 00:41:00.509 END TEST test_create_ublk 00:41:00.509 ************************************ 00:41:00.509 09:10:01 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:41:00.509 09:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:00.509 09:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 ************************************ 00:41:00.509 START TEST test_create_multi_ublk 00:41:00.509 ************************************ 00:41:00.509 09:10:01 -- common/autotest_common.sh@1111 -- # test_create_multi_ublk 00:41:00.509 09:10:01 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:01.291558] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- 
ublk/ublk.sh@62 -- # ublk_target= 00:41:00.509 09:10:01 -- ublk/ublk.sh@64 -- # seq 0 3 00:41:00.509 09:10:01 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:00.509 09:10:01 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:41:00.509 09:10:01 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:01.677640] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:41:00.509 [2024-04-18 09:10:01.678399] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:41:00.509 [2024-04-18 09:10:01.678527] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:00.509 [2024-04-18 09:10:01.678577] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:00.509 [2024-04-18 09:10:01.706428] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:00.509 [2024-04-18 09:10:01.706682] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:00.509 [2024-04-18 09:10:01.732426] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:00.509 [2024-04-18 09:10:01.733470] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:00.509 [2024-04-18 09:10:01.750161] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:00.509 09:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:01 -- ublk/ublk.sh@68 -- # ublk_id=0 00:41:00.509 09:10:01 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:00.509 09:10:01 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:41:00.509 09:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 09:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:02 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:41:00.509 09:10:02 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:41:00.509 09:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:41:00.509 [2024-04-18 09:10:02.152719] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:41:00.509 [2024-04-18 09:10:02.165933] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:41:00.509 [2024-04-18 09:10:02.166148] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:00.509 [2024-04-18 09:10:02.166191] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:41:00.509 [2024-04-18 09:10:02.203469] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:00.509 [2024-04-18 09:10:02.203766] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:00.509 [2024-04-18 09:10:02.229418] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
completed 00:41:00.509 [2024-04-18 09:10:02.230415] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:41:00.509 [2024-04-18 09:10:02.254521] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:41:00.509 09:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.509 09:10:02 -- ublk/ublk.sh@68 -- # ublk_id=1 00:41:00.509 09:10:02 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:00.509 09:10:02 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:41:00.509 09:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.509 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:41:00.776 09:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.776 09:10:02 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:41:00.776 09:10:02 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:41:00.776 09:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.776 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:41:00.776 [2024-04-18 09:10:02.668621] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:41:00.776 [2024-04-18 09:10:02.669297] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:41:00.776 [2024-04-18 09:10:02.669437] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:41:00.776 [2024-04-18 09:10:02.669487] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:41:00.776 [2024-04-18 09:10:02.704460] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:00.776 [2024-04-18 09:10:02.704660] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:00.776 [2024-04-18 09:10:02.730427] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:00.776 [2024-04-18 09:10:02.731374] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:41:00.776 [2024-04-18 09:10:02.757510] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:41:00.776 09:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:00.776 09:10:02 -- ublk/ublk.sh@68 -- # ublk_id=2 00:41:00.776 09:10:02 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:00.776 09:10:02 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:41:00.776 09:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:00.776 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:41:01.082 09:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:41:01.349 09:10:03 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:41:01.349 09:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:01.349 09:10:03 -- common/autotest_common.sh@10 -- # set +x 00:41:01.349 [2024-04-18 09:10:03.177631] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:41:01.349 [2024-04-18 09:10:03.178332] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:41:01.349 [2024-04-18 09:10:03.178358] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:41:01.349 [2024-04-18 09:10:03.178368] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:41:01.349 [2024-04-18 09:10:03.199433] ublk.c: 
327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:01.349 [2024-04-18 09:10:03.199470] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:01.349 [2024-04-18 09:10:03.225426] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:01.349 [2024-04-18 09:10:03.226137] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:41:01.349 [2024-04-18 09:10:03.252474] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:41:01.349 09:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@68 -- # ublk_id=3 00:41:01.349 09:10:03 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:41:01.349 09:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:01.349 09:10:03 -- common/autotest_common.sh@10 -- # set +x 00:41:01.349 09:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:41:01.349 { 00:41:01.349 "ublk_device": "/dev/ublkb0", 00:41:01.349 "id": 0, 00:41:01.349 "queue_depth": 512, 00:41:01.349 "num_queues": 4, 00:41:01.349 "bdev_name": "Malloc0" 00:41:01.349 }, 00:41:01.349 { 00:41:01.349 "ublk_device": "/dev/ublkb1", 00:41:01.349 "id": 1, 00:41:01.349 "queue_depth": 512, 00:41:01.349 "num_queues": 4, 00:41:01.349 "bdev_name": "Malloc1" 00:41:01.349 }, 00:41:01.349 { 00:41:01.349 "ublk_device": "/dev/ublkb2", 00:41:01.349 "id": 2, 00:41:01.349 "queue_depth": 512, 00:41:01.349 "num_queues": 4, 00:41:01.349 "bdev_name": "Malloc2" 00:41:01.349 }, 00:41:01.349 { 00:41:01.349 "ublk_device": "/dev/ublkb3", 00:41:01.349 "id": 3, 00:41:01.349 "queue_depth": 512, 00:41:01.349 "num_queues": 4, 00:41:01.349 "bdev_name": "Malloc3" 00:41:01.349 } 00:41:01.349 ]' 00:41:01.349 09:10:03 -- ublk/ublk.sh@72 -- # seq 0 3 00:41:01.349 09:10:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:01.349 09:10:03 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:41:01.349 09:10:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:41:01.349 09:10:03 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:41:01.349 09:10:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:01.349 09:10:03 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:41:01.619 09:10:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:41:01.619 09:10:03 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:01.619 09:10:03 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:41:01.619 09:10:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:41:01.619 09:10:03 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:41:01.619 09:10:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:41:01.619 09:10:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:01.619 09:10:03 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:41:01.891 09:10:03 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:01.892 09:10:03 -- ublk/ublk.sh@74 
-- # jq -r '.[2].ublk_device' 00:41:01.892 09:10:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:41:01.892 09:10:03 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:41:01.892 09:10:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:41:01.892 09:10:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:41:01.892 09:10:03 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:41:01.892 09:10:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:01.892 09:10:03 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:41:02.153 09:10:04 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:41:02.153 09:10:04 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:41:02.153 09:10:04 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:41:02.153 09:10:04 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:41:02.153 09:10:04 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:41:02.153 09:10:04 -- ublk/ublk.sh@85 -- # seq 0 3 00:41:02.153 09:10:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:02.153 09:10:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:41:02.153 09:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:02.153 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:41:02.153 [2024-04-18 09:10:04.249552] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:02.410 [2024-04-18 09:10:04.299496] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:02.410 [2024-04-18 09:10:04.299799] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:02.410 [2024-04-18 09:10:04.328436] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:02.410 [2024-04-18 09:10:04.328890] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:02.410 [2024-04-18 09:10:04.328924] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:02.410 09:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:02.410 09:10:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:02.410 09:10:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:41:02.410 09:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:02.410 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:41:02.410 [2024-04-18 09:10:04.336599] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:41:02.410 [2024-04-18 09:10:04.374463] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:02.410 [2024-04-18 09:10:04.376137] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:41:02.410 [2024-04-18 09:10:04.392436] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:02.410 [2024-04-18 09:10:04.392819] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:41:02.410 [2024-04-18 09:10:04.392847] 
ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:41:02.410 09:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:02.410 09:10:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:02.410 09:10:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:41:02.410 09:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:02.410 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:41:02.410 [2024-04-18 09:10:04.400569] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:41:02.410 [2024-04-18 09:10:04.443431] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:02.410 [2024-04-18 09:10:04.443678] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:41:02.410 [2024-04-18 09:10:04.470454] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:02.410 [2024-04-18 09:10:04.470852] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:41:02.410 [2024-04-18 09:10:04.470878] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:41:02.410 09:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:02.410 09:10:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:02.410 09:10:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:41:02.410 09:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:02.410 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:41:02.410 [2024-04-18 09:10:04.478581] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:41:02.666 [2024-04-18 09:10:04.521417] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:02.666 [2024-04-18 09:10:04.521712] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:41:02.666 [2024-04-18 09:10:04.548457] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:02.666 [2024-04-18 09:10:04.548885] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:41:02.666 [2024-04-18 09:10:04.548913] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:41:02.666 09:10:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:02.666 09:10:04 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:41:02.923 [2024-04-18 09:10:04.812585] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:41:02.923 [2024-04-18 09:10:04.820068] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:41:02.923 [2024-04-18 09:10:04.820126] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:02.923 09:10:04 -- ublk/ublk.sh@93 -- # seq 0 3 00:41:02.923 09:10:04 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:02.923 09:10:04 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:41:02.923 09:10:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:02.923 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:41:03.181 09:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:03.181 09:10:05 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:03.181 09:10:05 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:41:03.181 09:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:03.181 09:10:05 -- common/autotest_common.sh@10 -- # set +x 00:41:03.747 09:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:03.747 09:10:05 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 
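Teardown in test_create_multi_ublk runs strictly in this order: all four disks are stopped, the target is destroyed with a generous rpc timeout, and only then does the delete loop (its last two bdev_malloc_delete calls follow below) release the backing bdevs. Sketched:

    for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk "$i"; done
    scripts/rpc.py -t 120 ublk_destroy_target     # -t 120 raises the rpc timeout; destroy drains the queues first
    for m in Malloc0 Malloc1 Malloc2 Malloc3; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done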
00:41:03.747 09:10:05 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:41:03.747 09:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:03.747 09:10:05 -- common/autotest_common.sh@10 -- # set +x 00:41:04.005 09:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:04.005 09:10:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:04.005 09:10:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:41:04.005 09:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:04.005 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:41:04.578 09:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:04.578 09:10:06 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:41:04.578 09:10:06 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:41:04.578 09:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:04.578 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:41:04.578 09:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:04.578 09:10:06 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:41:04.578 09:10:06 -- lvol/common.sh@26 -- # jq length 00:41:04.578 09:10:06 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:41:04.578 09:10:06 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:04.578 09:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:04.578 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:41:04.578 09:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:04.578 09:10:06 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:41:04.578 09:10:06 -- lvol/common.sh@28 -- # jq length 00:41:04.578 09:10:06 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:41:04.578 00:41:04.578 real 0m5.303s 00:41:04.578 user 0m1.133s 00:41:04.578 sys 0m0.234s 00:41:04.578 09:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:41:04.578 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:41:04.578 ************************************ 00:41:04.578 END TEST test_create_multi_ublk 00:41:04.578 ************************************ 00:41:04.578 09:10:06 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:04.578 09:10:06 -- ublk/ublk.sh@147 -- # cleanup 00:41:04.578 09:10:06 -- ublk/ublk.sh@130 -- # killprocess 75945 00:41:04.578 09:10:06 -- common/autotest_common.sh@936 -- # '[' -z 75945 ']' 00:41:04.578 09:10:06 -- common/autotest_common.sh@940 -- # kill -0 75945 00:41:04.578 09:10:06 -- common/autotest_common.sh@941 -- # uname 00:41:04.578 09:10:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:41:04.578 09:10:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75945 00:41:04.578 09:10:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:41:04.578 09:10:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:41:04.578 09:10:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75945' 00:41:04.578 killing process with pid 75945 00:41:04.578 09:10:06 -- common/autotest_common.sh@955 -- # kill 75945 00:41:04.578 09:10:06 -- common/autotest_common.sh@960 -- # wait 75945 00:41:05.954 [2024-04-18 09:10:07.911365] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:41:05.954 [2024-04-18 09:10:07.913819] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:41:07.331 ************************************ 00:41:07.331 END TEST ublk 00:41:07.331 ************************************ 00:41:07.331 00:41:07.331 real 0m33.023s 00:41:07.331 user 0m48.582s 00:41:07.331 sys 0m8.982s 00:41:07.331 09:10:09 -- common/autotest_common.sh@1112 
-- # xtrace_disable 00:41:07.331 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:41:07.331 09:10:09 -- spdk/autotest.sh@250 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:41:07.331 09:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:41:07.331 09:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:41:07.331 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:41:07.594 ************************************ 00:41:07.594 START TEST ublk_recovery 00:41:07.594 ************************************ 00:41:07.594 09:10:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:41:07.594 * Looking for test storage... 00:41:07.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:41:07.594 09:10:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:41:07.594 09:10:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:41:07.594 09:10:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:41:07.594 09:10:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:41:07.594 09:10:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:41:07.594 09:10:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:41:07.594 09:10:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:41:07.594 09:10:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76373 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:07.594 09:10:09 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76373 00:41:07.594 09:10:09 -- common/autotest_common.sh@817 -- # '[' -z 76373 ']' 00:41:07.594 09:10:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.594 09:10:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:07.594 09:10:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:07.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.594 09:10:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:07.594 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:41:07.861 [2024-04-18 09:10:09.743801] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
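The ublk_recovery suite below exercises the kernel's user-recovery path: a target serving one ublk disk is hard-killed while fio is mid-run, a fresh target is started, and ublk_recover_disk re-attaches the still-open /dev/ublkb1 queues to the new daemon. The flow, sketched from the traces that follow (spdk_pid is an illustrative variable; the fio flags are abbreviated from the real invocation):

    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # disk id 1, 2 queues, depth 128
    fio --name=fio_test --filename=/dev/ublkb1 --rw=randrw --time_based --runtime=60 &
    kill -9 "$spdk_pid"                         # simulate a target crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &         # fresh daemon, same kernel device
    waitforlisten "$!"
    scripts/rpc.py ublk_recover_disk malloc0 1  # GET_DEV_INFO -> START/END_USER_RECOVERY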
00:41:07.861 [2024-04-18 09:10:09.744156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76373 ] 00:41:07.861 [2024-04-18 09:10:09.928168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:08.430 [2024-04-18 09:10:10.267343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.430 [2024-04-18 09:10:10.267346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:09.418 09:10:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:09.418 09:10:11 -- common/autotest_common.sh@850 -- # return 0 00:41:09.418 09:10:11 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:41:09.418 09:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:09.418 09:10:11 -- common/autotest_common.sh@10 -- # set +x 00:41:09.418 [2024-04-18 09:10:11.443841] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:09.418 09:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:09.418 09:10:11 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:41:09.418 09:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:09.418 09:10:11 -- common/autotest_common.sh@10 -- # set +x 00:41:09.676 malloc0 00:41:09.676 09:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:09.676 09:10:11 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:41:09.676 09:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:09.676 09:10:11 -- common/autotest_common.sh@10 -- # set +x 00:41:09.676 [2024-04-18 09:10:11.643086] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:41:09.676 [2024-04-18 09:10:11.643274] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:41:09.676 [2024-04-18 09:10:11.643292] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:09.676 [2024-04-18 09:10:11.643307] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:41:09.676 [2024-04-18 09:10:11.662411] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:09.676 [2024-04-18 09:10:11.662465] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:09.676 [2024-04-18 09:10:11.688416] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:09.676 [2024-04-18 09:10:11.688657] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:41:09.676 [2024-04-18 09:10:11.710876] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:41:09.676 1 00:41:09.676 09:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:09.676 09:10:11 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:41:11.050 09:10:12 -- ublk/ublk_recovery.sh@31 -- # fio_proc=76419 00:41:11.050 09:10:12 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:41:11.050 09:10:12 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:41:11.050 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:11.050 fio-3.35 00:41:11.050 Starting 1 process 00:41:16.315 09:10:17 -- 
ublk/ublk_recovery.sh@36 -- # kill -9 76373 00:41:16.315 09:10:17 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:41:21.580 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76373 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:41:21.580 09:10:22 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76546 00:41:21.580 09:10:22 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:21.580 09:10:22 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:21.580 09:10:22 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76546 00:41:21.580 09:10:22 -- common/autotest_common.sh@817 -- # '[' -z 76546 ']' 00:41:21.580 09:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:21.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.580 09:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:41:21.580 09:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:21.580 09:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:41:21.580 09:10:22 -- common/autotest_common.sh@10 -- # set +x 00:41:21.580 [2024-04-18 09:10:22.843757] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:41:21.580 [2024-04-18 09:10:22.844093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76546 ] 00:41:21.580 [2024-04-18 09:10:23.012818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:21.580 [2024-04-18 09:10:23.286439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.580 [2024-04-18 09:10:23.286460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:22.511 09:10:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:41:22.511 09:10:24 -- common/autotest_common.sh@850 -- # return 0 00:41:22.511 09:10:24 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:41:22.511 09:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:22.511 09:10:24 -- common/autotest_common.sh@10 -- # set +x 00:41:22.511 [2024-04-18 09:10:24.383001] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:22.511 09:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:22.511 09:10:24 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:41:22.511 09:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:22.511 09:10:24 -- common/autotest_common.sh@10 -- # set +x 00:41:22.511 malloc0 00:41:22.511 09:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:22.511 09:10:24 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:41:22.511 09:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:41:22.511 09:10:24 -- common/autotest_common.sh@10 -- # set +x 00:41:22.511 [2024-04-18 09:10:24.588007] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:41:22.511 [2024-04-18 09:10:24.588067] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:22.511 [2024-04-18 09:10:24.588079] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:41:22.511 [2024-04-18 09:10:24.602452] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_GET_DEV_INFO completed 00:41:22.511 [2024-04-18 09:10:24.602539] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:41:22.511 [2024-04-18 09:10:24.602673] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:41:22.511 1 00:41:22.511 09:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:41:22.511 09:10:24 -- ublk/ublk_recovery.sh@52 -- # wait 76419 00:41:22.768 [2024-04-18 09:10:24.628440] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:41:22.768 [2024-04-18 09:10:24.641814] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:41:22.768 [2024-04-18 09:10:24.665666] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:41:22.768 [2024-04-18 09:10:24.665717] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:42:19.079 00:42:19.079 fio_test: (groupid=0, jobs=1): err= 0: pid=76422: Thu Apr 18 09:11:12 2024 00:42:19.079 read: IOPS=18.4k, BW=72.0MiB/s (75.5MB/s)(4319MiB/60002msec) 00:42:19.079 slat (nsec): min=1905, max=1481.4k, avg=6543.38, stdev=3171.83 00:42:19.079 clat (usec): min=1288, max=6952.3k, avg=3405.20, stdev=51176.82 00:42:19.079 lat (usec): min=1297, max=6952.3k, avg=3411.74, stdev=51176.82 00:42:19.079 clat percentiles (usec): 00:42:19.079 | 1.00th=[ 2606], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2868], 00:42:19.079 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:42:19.079 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3294], 00:42:19.079 | 99.00th=[ 3949], 99.50th=[ 4293], 99.90th=[15533], 99.95th=[15926], 00:42:19.079 | 99.99th=[16188] 00:42:19.079 bw ( KiB/s): min=25552, max=93880, per=100.00%, avg=82759.47, stdev=8298.90, samples=106 00:42:19.079 iops : min= 6388, max=23470, avg=20689.87, stdev=2074.73, samples=106 00:42:19.079 write: IOPS=18.4k, BW=72.0MiB/s (75.4MB/s)(4317MiB/60002msec); 0 zone resets 00:42:19.079 slat (nsec): min=1965, max=584878, avg=6743.25, stdev=2912.21 00:42:19.079 clat (usec): min=1414, max=6952.6k, avg=3527.78, stdev=54493.74 00:42:19.079 lat (usec): min=1426, max=6952.6k, avg=3534.52, stdev=54493.74 00:42:19.079 clat percentiles (usec): 00:42:19.079 | 1.00th=[ 2671], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2933], 00:42:19.079 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:42:19.079 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3392], 00:42:19.079 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[15533], 99.95th=[15926], 00:42:19.079 | 99.99th=[16188] 00:42:19.079 bw ( KiB/s): min=23968, max=91768, per=100.00%, avg=82725.36, stdev=8362.03, samples=106 00:42:19.079 iops : min= 5992, max=22942, avg=20681.34, stdev=2090.51, samples=106 00:42:19.079 lat (msec) : 2=0.01%, 4=99.01%, 10=0.75%, 20=0.23%, >=2000=0.01% 00:42:19.079 cpu : usr=10.26%, sys=24.28%, ctx=136901, majf=0, minf=13 00:42:19.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:42:19.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:19.079 issued rwts: total=1105632,1105211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:19.079 00:42:19.079 Run status group 0 (all jobs): 00:42:19.079 READ: bw=72.0MiB/s (75.5MB/s), 
72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=4319MiB (4529MB), run=60002-60002msec 00:42:19.080 WRITE: bw=72.0MiB/s (75.4MB/s), 72.0MiB/s-72.0MiB/s (75.4MB/s-75.4MB/s), io=4317MiB (4527MB), run=60002-60002msec 00:42:19.080 00:42:19.080 Disk stats (read/write): 00:42:19.080 ublkb1: ios=1103245/1102884, merge=0/0, ticks=3709820/3768084, in_queue=7477904, util=99.90% 00:42:19.080 09:11:12 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:42:19.080 09:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:19.080 09:11:12 -- common/autotest_common.sh@10 -- # set +x 00:42:19.080 [2024-04-18 09:11:12.995962] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:42:19.080 [2024-04-18 09:11:13.029502] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:19.080 [2024-04-18 09:11:13.031069] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:42:19.080 [2024-04-18 09:11:13.056453] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:19.080 [2024-04-18 09:11:13.056913] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:42:19.080 [2024-04-18 09:11:13.057041] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:42:19.080 09:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:19.080 09:11:13 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:42:19.080 09:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:42:19.080 09:11:13 -- common/autotest_common.sh@10 -- # set +x 00:42:19.080 [2024-04-18 09:11:13.064577] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:42:19.080 [2024-04-18 09:11:13.082450] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:42:19.080 [2024-04-18 09:11:13.082519] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:42:19.080 09:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:42:19.080 09:11:13 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:19.080 09:11:13 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:42:19.080 09:11:13 -- ublk/ublk_recovery.sh@14 -- # killprocess 76546 00:42:19.080 09:11:13 -- common/autotest_common.sh@936 -- # '[' -z 76546 ']' 00:42:19.080 09:11:13 -- common/autotest_common.sh@940 -- # kill -0 76546 00:42:19.080 09:11:13 -- common/autotest_common.sh@941 -- # uname 00:42:19.080 09:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:19.080 09:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76546 00:42:19.080 09:11:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:19.080 09:11:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:19.080 09:11:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76546' 00:42:19.080 killing process with pid 76546 00:42:19.080 09:11:13 -- common/autotest_common.sh@955 -- # kill 76546 00:42:19.080 09:11:13 -- common/autotest_common.sh@960 -- # wait 76546 00:42:19.080 [2024-04-18 09:11:14.524865] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:42:19.080 [2024-04-18 09:11:14.525214] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:42:19.080 ************************************ 00:42:19.080 END TEST ublk_recovery 00:42:19.080 ************************************ 00:42:19.080 00:42:19.080 real 1m6.809s 00:42:19.080 user 1m46.795s 00:42:19.080 sys 0m36.172s 00:42:19.080 09:11:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:42:19.080 09:11:16 -- common/autotest_common.sh@10 -- 
# set +x 00:42:19.080 09:11:16 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@258 -- # timing_exit lib 00:42:19.080 09:11:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:42:19.080 09:11:16 -- common/autotest_common.sh@10 -- # set +x 00:42:19.080 09:11:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@337 -- # '[' 1 -eq 1 ']' 00:42:19.080 09:11:16 -- spdk/autotest.sh@338 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:19.080 09:11:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:42:19.080 09:11:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:19.080 09:11:16 -- common/autotest_common.sh@10 -- # set +x 00:42:19.080 ************************************ 00:42:19.080 START TEST ftl 00:42:19.080 ************************************ 00:42:19.080 09:11:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:19.080 * Looking for test storage... 00:42:19.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:19.080 09:11:16 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:19.080 09:11:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:42:19.080 09:11:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:19.080 09:11:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:19.080 09:11:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
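The ublk_recovery run that finished just above (END TEST ublk_recovery) exercises the crash/recover path: fio drives random I/O at /dev/ublkb1, the target is SIGKILLed mid-run (ublk_recovery.sh line 38 in the trace), a fresh target is started, and ublk_recover_disk reattaches the same kernel device so fio completes cleanly (util=99.90% in the disk stats). A minimal sketch of that core sequence, continuing from the setup sketch earlier:

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_pid=$!

  sleep 5
  kill -9 "$spdk_pid"                              # simulate a target crash mid-I/O
  sleep 5

  "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &      # fresh target, same core mask
  spdk_pid=$!
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.3; done

  "$rpc" ublk_create_target
  "$rpc" bdev_malloc_create -b malloc0 64 4096     # recreate the backing bdev
  "$rpc" ublk_recover_disk malloc0 1               # reattach /dev/ublkb1 in place
  wait "$fio_pid"                                  # fio must finish without errors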
00:42:19.080 09:11:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:19.080 09:11:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:19.080 09:11:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:19.080 09:11:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:19.080 09:11:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:19.080 09:11:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:19.080 09:11:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:19.080 09:11:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:19.080 09:11:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:19.080 09:11:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:19.080 09:11:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:19.080 09:11:16 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:19.080 09:11:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:19.080 09:11:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:19.080 09:11:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:19.080 09:11:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:19.080 09:11:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:19.080 09:11:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:19.080 09:11:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:19.080 09:11:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:19.080 09:11:16 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:19.080 09:11:16 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:19.080 09:11:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:19.080 09:11:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:19.080 09:11:16 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:19.080 09:11:16 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:42:19.080 09:11:16 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:42:19.080 09:11:16 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:42:19.080 09:11:16 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:42:19.080 09:11:16 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:19.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:19.080 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:19.080 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:19.080 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:19.080 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:19.080 09:11:17 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:42:19.080 09:11:17 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77376 00:42:19.080 09:11:17 -- ftl/ftl.sh@38 -- # waitforlisten 77376 00:42:19.080 09:11:17 -- common/autotest_common.sh@817 -- # '[' -z 77376 ']' 00:42:19.080 09:11:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:19.080 09:11:17 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:42:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:19.080 09:11:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:19.080 09:11:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:19.080 09:11:17 -- common/autotest_common.sh@10 -- # set +x 00:42:19.080 [2024-04-18 09:11:17.257917] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:42:19.080 [2024-04-18 09:11:17.258286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77376 ] 00:42:19.080 [2024-04-18 09:11:17.423980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.080 [2024-04-18 09:11:17.700730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.080 09:11:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:19.080 09:11:18 -- common/autotest_common.sh@850 -- # return 0 00:42:19.080 09:11:18 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:42:19.080 09:11:18 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:42:19.080 09:11:19 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:42:19.080 09:11:19 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:19.080 09:11:20 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:42:19.080 09:11:20 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:42:19.080 09:11:20 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:42:19.080 09:11:20 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:42:19.080 09:11:20 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:42:19.080 09:11:20 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:42:19.080 09:11:20 -- ftl/ftl.sh@50 -- # break 00:42:19.080 09:11:20 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:42:19.080 09:11:20 -- ftl/ftl.sh@59 -- # base_size=1310720 00:42:19.080 09:11:20 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:42:19.080 09:11:20 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:42:19.080 09:11:20 -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:42:19.080 09:11:20 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:42:19.080 09:11:20 -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:42:19.080 09:11:20 -- ftl/ftl.sh@63 -- # break 00:42:19.080 09:11:20 -- ftl/ftl.sh@66 -- # killprocess 77376 00:42:19.080 09:11:20 -- common/autotest_common.sh@936 -- # '[' -z 77376 ']' 00:42:19.080 09:11:20 -- common/autotest_common.sh@940 -- # kill -0 77376 00:42:19.081 09:11:20 -- common/autotest_common.sh@941 -- # uname 00:42:19.081 09:11:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:19.081 09:11:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77376 00:42:19.081 09:11:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:19.081 09:11:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:19.081 09:11:20 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 77376' 00:42:19.081 killing process with pid 77376 00:42:19.081 09:11:20 -- common/autotest_common.sh@955 -- # kill 77376 00:42:19.081 09:11:20 -- common/autotest_common.sh@960 -- # wait 77376 00:42:21.608 09:11:23 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:42:21.608 09:11:23 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:42:21.608 09:11:23 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:42:21.608 09:11:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:42:21.608 09:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:42:21.608 09:11:23 -- common/autotest_common.sh@10 -- # set +x 00:42:21.608 ************************************ 00:42:21.608 START TEST ftl_fio_basic 00:42:21.608 ************************************ 00:42:21.608 09:11:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:42:21.867 * Looking for test storage... 00:42:21.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:21.867 09:11:23 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:21.867 09:11:23 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:42:21.867 09:11:23 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:21.867 09:11:23 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:21.867 09:11:23 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:21.867 09:11:23 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:21.867 09:11:23 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:21.867 09:11:23 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:21.867 09:11:23 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:21.867 09:11:23 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:21.867 09:11:23 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:21.867 09:11:23 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:21.867 09:11:23 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:21.867 09:11:23 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:21.867 09:11:23 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:21.867 09:11:23 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:21.867 09:11:23 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:21.867 09:11:23 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:21.867 09:11:23 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:21.867 09:11:23 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:21.867 09:11:23 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:21.867 09:11:23 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:21.867 09:11:23 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:21.867 09:11:23 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:21.867 09:11:23 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:21.867 09:11:23 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:21.867 09:11:23 -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:42:21.867 09:11:23 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:21.867 09:11:23 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:21.867 09:11:23 -- ftl/fio.sh@11 -- # declare -A suite 00:42:21.867 09:11:23 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:42:21.867 09:11:23 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:42:21.867 09:11:23 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:42:21.867 09:11:23 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:21.867 09:11:23 -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:42:21.867 09:11:23 -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:42:21.867 09:11:23 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:42:21.867 09:11:23 -- ftl/fio.sh@26 -- # uuid= 00:42:21.867 09:11:23 -- ftl/fio.sh@27 -- # timeout=240 00:42:21.867 09:11:23 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:42:21.867 09:11:23 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:42:21.867 09:11:23 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:42:21.867 09:11:23 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:42:21.867 09:11:23 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:21.867 09:11:23 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:21.867 09:11:23 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:42:21.867 09:11:23 -- ftl/fio.sh@45 -- # svcpid=77526 00:42:21.867 09:11:23 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:42:21.867 09:11:23 -- ftl/fio.sh@46 -- # waitforlisten 77526 00:42:21.867 09:11:23 -- common/autotest_common.sh@817 -- # '[' -z 77526 ']' 00:42:21.867 09:11:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.867 09:11:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:42:21.867 09:11:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.867 09:11:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:42:21.867 09:11:23 -- common/autotest_common.sh@10 -- # set +x 00:42:21.867 [2024-04-18 09:11:23.907157] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
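A little earlier in this trace (ftl.sh@47 and @60) the FTL suite picked its two PCIe devices by filtering bdev_get_bdevs output with jq: the write-buffer cache must expose 64-byte metadata, while the base device only needs to be large enough and distinct from the cache. A sketch of that selection against a target that already has the NVMe controllers attached; the jq filters are the ones traced above, but the variable plumbing is reconstructed, not copied from the script:

  cache_size=1310720    # minimum block count, as set at ftl.sh@46

  nv_cache=$("$rpc" bdev_get_bdevs | jq -r "
    .[] | select(.md_size==64 and .zoned == false and .num_blocks >= $cache_size)
        | .driver_specific.nvme[].pci_address" | head -n1)
  # -> 0000:00:10.0 in this run

  device=$("$rpc" bdev_get_bdevs | jq -r "
    .[] | select(.driver_specific.nvme[0].pci_address != \"$nv_cache\"
                 and .zoned == false and .num_blocks >= 1310720)
        | .driver_specific.nvme[].pci_address" | head -n1)
  # -> 0000:00:11.0 in this run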
00:42:21.867 [2024-04-18 09:11:23.910966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77526 ] 00:42:22.178 [2024-04-18 09:11:24.104753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:22.436 [2024-04-18 09:11:24.408400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:22.436 [2024-04-18 09:11:24.408460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:22.436 [2024-04-18 09:11:24.408451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.809 09:11:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:42:23.809 09:11:25 -- common/autotest_common.sh@850 -- # return 0 00:42:23.809 09:11:25 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:42:23.809 09:11:25 -- ftl/common.sh@54 -- # local name=nvme0 00:42:23.809 09:11:25 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:42:23.809 09:11:25 -- ftl/common.sh@56 -- # local size=103424 00:42:23.809 09:11:25 -- ftl/common.sh@59 -- # local base_bdev 00:42:23.809 09:11:25 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:23.809 09:11:25 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:42:23.809 09:11:25 -- ftl/common.sh@62 -- # local base_size 00:42:23.809 09:11:25 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:42:23.809 09:11:25 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:42:23.809 09:11:25 -- common/autotest_common.sh@1365 -- # local bdev_info 00:42:23.809 09:11:25 -- common/autotest_common.sh@1366 -- # local bs 00:42:23.809 09:11:25 -- common/autotest_common.sh@1367 -- # local nb 00:42:23.809 09:11:25 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:42:24.066 09:11:26 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:42:24.066 { 00:42:24.066 "name": "nvme0n1", 00:42:24.066 "aliases": [ 00:42:24.066 "9e1853db-10d5-49e2-9c84-09d7fc19fecc" 00:42:24.066 ], 00:42:24.066 "product_name": "NVMe disk", 00:42:24.066 "block_size": 4096, 00:42:24.066 "num_blocks": 1310720, 00:42:24.066 "uuid": "9e1853db-10d5-49e2-9c84-09d7fc19fecc", 00:42:24.066 "assigned_rate_limits": { 00:42:24.066 "rw_ios_per_sec": 0, 00:42:24.066 "rw_mbytes_per_sec": 0, 00:42:24.066 "r_mbytes_per_sec": 0, 00:42:24.066 "w_mbytes_per_sec": 0 00:42:24.066 }, 00:42:24.066 "claimed": false, 00:42:24.066 "zoned": false, 00:42:24.066 "supported_io_types": { 00:42:24.066 "read": true, 00:42:24.066 "write": true, 00:42:24.066 "unmap": true, 00:42:24.066 "write_zeroes": true, 00:42:24.066 "flush": true, 00:42:24.066 "reset": true, 00:42:24.066 "compare": true, 00:42:24.066 "compare_and_write": false, 00:42:24.066 "abort": true, 00:42:24.066 "nvme_admin": true, 00:42:24.066 "nvme_io": true 00:42:24.066 }, 00:42:24.066 "driver_specific": { 00:42:24.066 "nvme": [ 00:42:24.066 { 00:42:24.066 "pci_address": "0000:00:11.0", 00:42:24.066 "trid": { 00:42:24.066 "trtype": "PCIe", 00:42:24.066 "traddr": "0000:00:11.0" 00:42:24.066 }, 00:42:24.066 "ctrlr_data": { 00:42:24.066 "cntlid": 0, 00:42:24.066 "vendor_id": "0x1b36", 00:42:24.066 "model_number": "QEMU NVMe Ctrl", 00:42:24.066 "serial_number": "12341", 00:42:24.066 "firmware_revision": "8.0.0", 00:42:24.066 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:24.066 "oacs": { 00:42:24.066 
"security": 0, 00:42:24.066 "format": 1, 00:42:24.066 "firmware": 0, 00:42:24.066 "ns_manage": 1 00:42:24.066 }, 00:42:24.066 "multi_ctrlr": false, 00:42:24.066 "ana_reporting": false 00:42:24.066 }, 00:42:24.066 "vs": { 00:42:24.066 "nvme_version": "1.4" 00:42:24.066 }, 00:42:24.066 "ns_data": { 00:42:24.066 "id": 1, 00:42:24.066 "can_share": false 00:42:24.066 } 00:42:24.066 } 00:42:24.066 ], 00:42:24.066 "mp_policy": "active_passive" 00:42:24.066 } 00:42:24.066 } 00:42:24.066 ]' 00:42:24.066 09:11:26 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:42:24.066 09:11:26 -- common/autotest_common.sh@1369 -- # bs=4096 00:42:24.066 09:11:26 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:42:24.324 09:11:26 -- common/autotest_common.sh@1370 -- # nb=1310720 00:42:24.324 09:11:26 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:42:24.324 09:11:26 -- common/autotest_common.sh@1374 -- # echo 5120 00:42:24.324 09:11:26 -- ftl/common.sh@63 -- # base_size=5120 00:42:24.324 09:11:26 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:24.324 09:11:26 -- ftl/common.sh@67 -- # clear_lvols 00:42:24.324 09:11:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:24.324 09:11:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:24.581 09:11:26 -- ftl/common.sh@28 -- # stores= 00:42:24.581 09:11:26 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:42:24.838 09:11:26 -- ftl/common.sh@68 -- # lvs=b716af86-9225-4092-8dfc-43febcee341e 00:42:24.839 09:11:26 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b716af86-9225-4092-8dfc-43febcee341e 00:42:25.097 09:11:27 -- ftl/fio.sh@48 -- # split_bdev=427357d9-6108-4677-8a95-24357e0ad567 00:42:25.097 09:11:27 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 427357d9-6108-4677-8a95-24357e0ad567 00:42:25.097 09:11:27 -- ftl/common.sh@35 -- # local name=nvc0 00:42:25.097 09:11:27 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:42:25.097 09:11:27 -- ftl/common.sh@37 -- # local base_bdev=427357d9-6108-4677-8a95-24357e0ad567 00:42:25.097 09:11:27 -- ftl/common.sh@38 -- # local cache_size= 00:42:25.097 09:11:27 -- ftl/common.sh@41 -- # get_bdev_size 427357d9-6108-4677-8a95-24357e0ad567 00:42:25.097 09:11:27 -- common/autotest_common.sh@1364 -- # local bdev_name=427357d9-6108-4677-8a95-24357e0ad567 00:42:25.097 09:11:27 -- common/autotest_common.sh@1365 -- # local bdev_info 00:42:25.097 09:11:27 -- common/autotest_common.sh@1366 -- # local bs 00:42:25.097 09:11:27 -- common/autotest_common.sh@1367 -- # local nb 00:42:25.097 09:11:27 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 427357d9-6108-4677-8a95-24357e0ad567 00:42:25.355 09:11:27 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:42:25.355 { 00:42:25.355 "name": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:25.355 "aliases": [ 00:42:25.355 "lvs/nvme0n1p0" 00:42:25.355 ], 00:42:25.355 "product_name": "Logical Volume", 00:42:25.355 "block_size": 4096, 00:42:25.355 "num_blocks": 26476544, 00:42:25.355 "uuid": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:25.355 "assigned_rate_limits": { 00:42:25.355 "rw_ios_per_sec": 0, 00:42:25.355 "rw_mbytes_per_sec": 0, 00:42:25.355 "r_mbytes_per_sec": 0, 00:42:25.355 "w_mbytes_per_sec": 0 00:42:25.355 }, 00:42:25.355 "claimed": false, 00:42:25.355 "zoned": false, 00:42:25.355 "supported_io_types": { 
00:42:25.355 "read": true, 00:42:25.355 "write": true, 00:42:25.355 "unmap": true, 00:42:25.355 "write_zeroes": true, 00:42:25.355 "flush": false, 00:42:25.355 "reset": true, 00:42:25.355 "compare": false, 00:42:25.355 "compare_and_write": false, 00:42:25.355 "abort": false, 00:42:25.355 "nvme_admin": false, 00:42:25.355 "nvme_io": false 00:42:25.355 }, 00:42:25.355 "driver_specific": { 00:42:25.355 "lvol": { 00:42:25.355 "lvol_store_uuid": "b716af86-9225-4092-8dfc-43febcee341e", 00:42:25.355 "base_bdev": "nvme0n1", 00:42:25.355 "thin_provision": true, 00:42:25.355 "snapshot": false, 00:42:25.355 "clone": false, 00:42:25.355 "esnap_clone": false 00:42:25.355 } 00:42:25.355 } 00:42:25.355 } 00:42:25.355 ]' 00:42:25.355 09:11:27 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:42:25.612 09:11:27 -- common/autotest_common.sh@1369 -- # bs=4096 00:42:25.612 09:11:27 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:42:25.612 09:11:27 -- common/autotest_common.sh@1370 -- # nb=26476544 00:42:25.612 09:11:27 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:42:25.612 09:11:27 -- common/autotest_common.sh@1374 -- # echo 103424 00:42:25.613 09:11:27 -- ftl/common.sh@41 -- # local base_size=5171 00:42:25.613 09:11:27 -- ftl/common.sh@44 -- # local nvc_bdev 00:42:25.613 09:11:27 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:25.871 09:11:27 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:25.871 09:11:27 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:25.871 09:11:27 -- ftl/common.sh@48 -- # get_bdev_size 427357d9-6108-4677-8a95-24357e0ad567 00:42:25.871 09:11:27 -- common/autotest_common.sh@1364 -- # local bdev_name=427357d9-6108-4677-8a95-24357e0ad567 00:42:25.871 09:11:27 -- common/autotest_common.sh@1365 -- # local bdev_info 00:42:25.871 09:11:27 -- common/autotest_common.sh@1366 -- # local bs 00:42:25.872 09:11:27 -- common/autotest_common.sh@1367 -- # local nb 00:42:25.872 09:11:27 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 427357d9-6108-4677-8a95-24357e0ad567 00:42:26.129 09:11:28 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:42:26.129 { 00:42:26.129 "name": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:26.129 "aliases": [ 00:42:26.129 "lvs/nvme0n1p0" 00:42:26.129 ], 00:42:26.129 "product_name": "Logical Volume", 00:42:26.129 "block_size": 4096, 00:42:26.129 "num_blocks": 26476544, 00:42:26.129 "uuid": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:26.129 "assigned_rate_limits": { 00:42:26.129 "rw_ios_per_sec": 0, 00:42:26.129 "rw_mbytes_per_sec": 0, 00:42:26.129 "r_mbytes_per_sec": 0, 00:42:26.129 "w_mbytes_per_sec": 0 00:42:26.129 }, 00:42:26.129 "claimed": false, 00:42:26.129 "zoned": false, 00:42:26.129 "supported_io_types": { 00:42:26.129 "read": true, 00:42:26.129 "write": true, 00:42:26.129 "unmap": true, 00:42:26.129 "write_zeroes": true, 00:42:26.129 "flush": false, 00:42:26.129 "reset": true, 00:42:26.129 "compare": false, 00:42:26.129 "compare_and_write": false, 00:42:26.129 "abort": false, 00:42:26.129 "nvme_admin": false, 00:42:26.129 "nvme_io": false 00:42:26.129 }, 00:42:26.129 "driver_specific": { 00:42:26.129 "lvol": { 00:42:26.129 "lvol_store_uuid": "b716af86-9225-4092-8dfc-43febcee341e", 00:42:26.129 "base_bdev": "nvme0n1", 00:42:26.129 "thin_provision": true, 00:42:26.129 "snapshot": false, 00:42:26.129 "clone": false, 00:42:26.129 "esnap_clone": false 00:42:26.129 } 00:42:26.129 } 00:42:26.129 
} 00:42:26.129 ]' 00:42:26.129 09:11:28 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:42:26.129 09:11:28 -- common/autotest_common.sh@1369 -- # bs=4096 00:42:26.129 09:11:28 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:42:26.129 09:11:28 -- common/autotest_common.sh@1370 -- # nb=26476544 00:42:26.129 09:11:28 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:42:26.129 09:11:28 -- common/autotest_common.sh@1374 -- # echo 103424 00:42:26.129 09:11:28 -- ftl/common.sh@48 -- # cache_size=5171 00:42:26.129 09:11:28 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:26.386 09:11:28 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:42:26.386 09:11:28 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:42:26.386 09:11:28 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:42:26.386 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:42:26.386 09:11:28 -- ftl/fio.sh@56 -- # get_bdev_size 427357d9-6108-4677-8a95-24357e0ad567 00:42:26.386 09:11:28 -- common/autotest_common.sh@1364 -- # local bdev_name=427357d9-6108-4677-8a95-24357e0ad567 00:42:26.386 09:11:28 -- common/autotest_common.sh@1365 -- # local bdev_info 00:42:26.386 09:11:28 -- common/autotest_common.sh@1366 -- # local bs 00:42:26.386 09:11:28 -- common/autotest_common.sh@1367 -- # local nb 00:42:26.386 09:11:28 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 427357d9-6108-4677-8a95-24357e0ad567 00:42:26.994 09:11:28 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:42:26.994 { 00:42:26.994 "name": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:26.994 "aliases": [ 00:42:26.994 "lvs/nvme0n1p0" 00:42:26.994 ], 00:42:26.994 "product_name": "Logical Volume", 00:42:26.994 "block_size": 4096, 00:42:26.994 "num_blocks": 26476544, 00:42:26.994 "uuid": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:26.994 "assigned_rate_limits": { 00:42:26.994 "rw_ios_per_sec": 0, 00:42:26.994 "rw_mbytes_per_sec": 0, 00:42:26.994 "r_mbytes_per_sec": 0, 00:42:26.994 "w_mbytes_per_sec": 0 00:42:26.994 }, 00:42:26.994 "claimed": false, 00:42:26.994 "zoned": false, 00:42:26.994 "supported_io_types": { 00:42:26.994 "read": true, 00:42:26.994 "write": true, 00:42:26.994 "unmap": true, 00:42:26.994 "write_zeroes": true, 00:42:26.994 "flush": false, 00:42:26.994 "reset": true, 00:42:26.994 "compare": false, 00:42:26.994 "compare_and_write": false, 00:42:26.994 "abort": false, 00:42:26.994 "nvme_admin": false, 00:42:26.994 "nvme_io": false 00:42:26.994 }, 00:42:26.994 "driver_specific": { 00:42:26.994 "lvol": { 00:42:26.994 "lvol_store_uuid": "b716af86-9225-4092-8dfc-43febcee341e", 00:42:26.994 "base_bdev": "nvme0n1", 00:42:26.994 "thin_provision": true, 00:42:26.994 "snapshot": false, 00:42:26.994 "clone": false, 00:42:26.994 "esnap_clone": false 00:42:26.994 } 00:42:26.994 } 00:42:26.994 } 00:42:26.994 ]' 00:42:26.994 09:11:28 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:42:26.994 09:11:28 -- common/autotest_common.sh@1369 -- # bs=4096 00:42:26.994 09:11:28 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:42:26.994 09:11:28 -- common/autotest_common.sh@1370 -- # nb=26476544 00:42:26.994 09:11:28 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:42:26.994 09:11:28 -- common/autotest_common.sh@1374 -- # echo 103424 00:42:26.994 09:11:28 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:42:26.994 09:11:28 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:42:26.994 09:11:28 -- 
ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 427357d9-6108-4677-8a95-24357e0ad567 -c nvc0n1p0 --l2p_dram_limit 60 00:42:26.994 [2024-04-18 09:11:29.080144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.080462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:26.994 [2024-04-18 09:11:29.080581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:26.994 [2024-04-18 09:11:29.080681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.080824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.080957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:26.994 [2024-04-18 09:11:29.081055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:42:26.994 [2024-04-18 09:11:29.081119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.081188] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:26.994 [2024-04-18 09:11:29.082751] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:26.994 [2024-04-18 09:11:29.082941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.083031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:26.994 [2024-04-18 09:11:29.083079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:42:26.994 [2024-04-18 09:11:29.083154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.083455] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4f7ae9dd-0c91-4528-ab01-5e6b80a3ef70 00:42:26.994 [2024-04-18 09:11:29.085143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.085285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:26.994 [2024-04-18 09:11:29.085407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:42:26.994 [2024-04-18 09:11:29.085490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.093508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.093811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:26.994 [2024-04-18 09:11:29.093955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.867 ms 00:42:26.994 [2024-04-18 09:11:29.094003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.094157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.094217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:26.994 [2024-04-18 09:11:29.094335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:42:26.994 [2024-04-18 09:11:29.094385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.094512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.994 [2024-04-18 09:11:29.094588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register 
IO device 00:42:26.994 [2024-04-18 09:11:29.094742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:26.994 [2024-04-18 09:11:29.094794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:26.994 [2024-04-18 09:11:29.094872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:27.253 [2024-04-18 09:11:29.102021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.253 [2024-04-18 09:11:29.102260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:27.253 [2024-04-18 09:11:29.102355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.162 ms 00:42:27.253 [2024-04-18 09:11:29.102417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.253 [2024-04-18 09:11:29.102529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.253 [2024-04-18 09:11:29.102662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:27.253 [2024-04-18 09:11:29.102727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:42:27.253 [2024-04-18 09:11:29.102764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.253 [2024-04-18 09:11:29.102863] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:27.253 [2024-04-18 09:11:29.103052] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:42:27.253 [2024-04-18 09:11:29.103132] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:27.253 [2024-04-18 09:11:29.103196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:42:27.253 [2024-04-18 09:11:29.103322] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:27.253 [2024-04-18 09:11:29.103404] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:27.253 [2024-04-18 09:11:29.103475] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:27.253 [2024-04-18 09:11:29.103594] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:27.253 [2024-04-18 09:11:29.103635] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:42:27.253 [2024-04-18 09:11:29.103715] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:42:27.253 [2024-04-18 09:11:29.103773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.253 [2024-04-18 09:11:29.103814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:27.253 [2024-04-18 09:11:29.103856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:42:27.253 [2024-04-18 09:11:29.103972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.104124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.104163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:27.254 [2024-04-18 09:11:29.104203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:42:27.254 [2024-04-18 09:11:29.104241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 
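One genuine script bug is recorded a few lines back in this trace: fio.sh line 52 executes '[' -eq 1 ']' and bash reports "[: -eq: unary operator expected", because the variable on the left-hand side expanded to an empty string and the test collapsed to two arguments. The run only continues because the failed test is treated as false, so the guard never actually fires. The variable's name is not visible in the trace, so $some_flag below is a placeholder; either robust form keeps the guard working when it is unset:

  [ "${some_flag:-0}" -eq 1 ]     # quote and default: [ always sees three arguments
  [[ ${some_flag:-0} -eq 1 ]]     # or bash [[ ]], which does not word-split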
[2024-04-18 09:11:29.104473] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:27.254 [2024-04-18 09:11:29.104529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:27.254 [2024-04-18 09:11:29.104577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:27.254 [2024-04-18 09:11:29.104614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:27.254 [2024-04-18 09:11:29.104657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:27.254 [2024-04-18 09:11:29.104741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:27.254 [2024-04-18 09:11:29.104790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:27.254 [2024-04-18 09:11:29.104825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:27.254 [2024-04-18 09:11:29.104865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:27.254 [2024-04-18 09:11:29.104900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:27.254 [2024-04-18 09:11:29.104939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:27.254 [2024-04-18 09:11:29.104974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:27.254 [2024-04-18 09:11:29.105015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:27.254 [2024-04-18 09:11:29.105121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:27.254 [2024-04-18 09:11:29.105172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:42:27.254 [2024-04-18 09:11:29.105207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:27.254 [2024-04-18 09:11:29.105246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:27.254 [2024-04-18 09:11:29.105281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:42:27.254 [2024-04-18 09:11:29.105318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:27.254 [2024-04-18 09:11:29.105354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:42:27.254 [2024-04-18 09:11:29.105418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:42:27.254 [2024-04-18 09:11:29.105532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:42:27.254 [2024-04-18 09:11:29.105579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:27.254 [2024-04-18 09:11:29.105614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:27.254 [2024-04-18 09:11:29.105652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:42:27.254 [2024-04-18 09:11:29.105687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:27.254 [2024-04-18 09:11:29.105724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:42:27.254 [2024-04-18 09:11:29.105760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:42:27.254 [2024-04-18 09:11:29.105859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:27.254 [2024-04-18 09:11:29.105898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:27.254 [2024-04-18 09:11:29.105936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:42:27.254 [2024-04-18 09:11:29.105972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
p2l3 00:42:27.254 [2024-04-18 09:11:29.106010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:42:27.254 [2024-04-18 09:11:29.106045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:42:27.254 [2024-04-18 09:11:29.106082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:27.254 [2024-04-18 09:11:29.106118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:27.254 [2024-04-18 09:11:29.106236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:27.254 [2024-04-18 09:11:29.106272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:27.254 [2024-04-18 09:11:29.106311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:42:27.254 [2024-04-18 09:11:29.106347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:27.254 [2024-04-18 09:11:29.106397] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:27.254 [2024-04-18 09:11:29.106436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:27.254 [2024-04-18 09:11:29.106487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:27.254 [2024-04-18 09:11:29.106562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:27.254 [2024-04-18 09:11:29.106667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:27.254 [2024-04-18 09:11:29.106709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:27.254 [2024-04-18 09:11:29.106748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:27.254 [2024-04-18 09:11:29.106784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:27.254 [2024-04-18 09:11:29.106822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:27.254 [2024-04-18 09:11:29.106857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:27.254 [2024-04-18 09:11:29.106904] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:27.254 [2024-04-18 09:11:29.107061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:27.254 [2024-04-18 09:11:29.107136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:27.254 [2024-04-18 09:11:29.107194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:42:27.254 [2024-04-18 09:11:29.107337] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:42:27.254 [2024-04-18 09:11:29.107425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:42:27.254 [2024-04-18 09:11:29.107542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:42:27.254 [2024-04-18 09:11:29.107601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:42:27.254 [2024-04-18 09:11:29.107695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 
blk_offs:0x5d20 blk_sz:0x400 00:42:27.254 [2024-04-18 09:11:29.107771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:42:27.254 [2024-04-18 09:11:29.107836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:42:27.254 [2024-04-18 09:11:29.107934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:42:27.254 [2024-04-18 09:11:29.108150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:42:27.254 [2024-04-18 09:11:29.108211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:42:27.254 [2024-04-18 09:11:29.108272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:42:27.254 [2024-04-18 09:11:29.108395] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:27.254 [2024-04-18 09:11:29.108463] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:27.254 [2024-04-18 09:11:29.108555] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:27.254 [2024-04-18 09:11:29.108663] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:27.254 [2024-04-18 09:11:29.108725] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:27.254 [2024-04-18 09:11:29.108833] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:27.254 [2024-04-18 09:11:29.108893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.108981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:27.254 [2024-04-18 09:11:29.109027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:42:27.254 [2024-04-18 09:11:29.109067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.138565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.138872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:27.254 [2024-04-18 09:11:29.138989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.318 ms 00:42:27.254 [2024-04-18 09:11:29.139039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.139231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.139324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:27.254 [2024-04-18 09:11:29.139428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:42:27.254 [2024-04-18 09:11:29.139529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.208097] mngt/ftl_mngt.c: 406:trace_step: 
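From here the target scrubs the NV cache data region (first startup only, 4 GiB in this run) and finishes bringing up ftl0, after which fio.sh runs the basic suite (the randw-verify jobs declared earlier) against the FTL bdev through SPDK's fio bdev plugin. A hedged sketch of one such invocation: the plugin path, the spdk_json_conf option, and the job-file path are assumptions about the SPDK fio plugin rather than anything shown in this log, while FTL_BDEV_NAME and FTL_JSON_CONF are the variables exported at fio.sh@39/@40 above:

  # Sketch only: plugin path, option names, and job file assumed, not traced here.
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
    fio "$SPDK/test/ftl/config/fio/randw-verify.fio" \
        --ioengine=spdk_bdev \
        --spdk_json_conf="$FTL_JSON_CONF" \
        --filename="$FTL_BDEV_NAME"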
*NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.208414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:27.254 [2024-04-18 09:11:29.208519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.428 ms 00:42:27.254 [2024-04-18 09:11:29.208624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.208723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.208777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:27.254 [2024-04-18 09:11:29.208871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:27.254 [2024-04-18 09:11:29.208917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.209551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.209696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:27.254 [2024-04-18 09:11:29.209794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:42:27.254 [2024-04-18 09:11:29.209842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.254 [2024-04-18 09:11:29.210053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.254 [2024-04-18 09:11:29.210109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:27.255 [2024-04-18 09:11:29.210194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:42:27.255 [2024-04-18 09:11:29.210275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.255 [2024-04-18 09:11:29.250332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.255 [2024-04-18 09:11:29.250676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:27.255 [2024-04-18 09:11:29.250780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.985 ms 00:42:27.255 [2024-04-18 09:11:29.250878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.255 [2024-04-18 09:11:29.269148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:27.255 [2024-04-18 09:11:29.288841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.255 [2024-04-18 09:11:29.289149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:27.255 [2024-04-18 09:11:29.289282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.739 ms 00:42:27.255 [2024-04-18 09:11:29.289394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.512 [2024-04-18 09:11:29.361019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.512 [2024-04-18 09:11:29.361311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:27.512 [2024-04-18 09:11:29.361446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.511 ms 00:42:27.512 [2024-04-18 09:11:29.361547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.512 [2024-04-18 09:11:29.361665] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
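In this trace every management step is logged as a quartet: an 'Action' marker, the step name, its duration, and a status (0 on success). To see where startup time goes, the quartets can be tabulated straight from the raw log. The one-liner below is a minimal sketch; it assumes one *NOTICE* entry per line in the captured log and uses a hypothetical file name, ftl.log:

  # Tabulate "<duration> ms  <step name>" for every completed FTL trace step.
  awk -F 'name: |duration: ' '
      /407:trace_step/ { name = $2 }                        # source line 407 carries the step name
      /409:trace_step/ { sub(/ ms.*/, "", $2)               # source line 409 carries "duration: N ms"
                         printf "%10.3f ms  %s\n", $2, name }
  ' ftl.log

Applied to this run it would show the NV-cache scrub reported just below (4383.759 ms) accounting for most of the 4991.899 ms 'FTL startup' total.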
00:42:27.512 [2024-04-18 09:11:29.361776] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:42:31.728 [2024-04-18 09:11:33.745472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.728 [2024-04-18 09:11:33.745839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:31.728 [2024-04-18 09:11:33.746076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4383.759 ms 00:42:31.728 [2024-04-18 09:11:33.746228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.728 [2024-04-18 09:11:33.746669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.728 [2024-04-18 09:11:33.746847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:31.728 [2024-04-18 09:11:33.746998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:42:31.728 [2024-04-18 09:11:33.747065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.728 [2024-04-18 09:11:33.794202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.728 [2024-04-18 09:11:33.794572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:31.728 [2024-04-18 09:11:33.794694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.911 ms 00:42:31.728 [2024-04-18 09:11:33.794745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:33.842303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:33.842634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:31.986 [2024-04-18 09:11:33.842776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.429 ms 00:42:31.986 [2024-04-18 09:11:33.842818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:33.843514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:33.843670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:31.986 [2024-04-18 09:11:33.843789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:42:31.986 [2024-04-18 09:11:33.843836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:33.962656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:33.962971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:31.986 [2024-04-18 09:11:33.963201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.610 ms 00:42:31.986 [2024-04-18 09:11:33.963250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.015098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:34.015455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:31.986 [2024-04-18 09:11:34.015563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.725 ms 00:42:31.986 [2024-04-18 09:11:34.015609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.022158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:34.022543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:42:31.986 [2024-04-18 09:11:34.022741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.413 ms 00:42:31.986 [2024-04-18 09:11:34.022814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.069147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:34.069542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:31.986 [2024-04-18 09:11:34.069716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.108 ms 00:42:31.986 [2024-04-18 09:11:34.069785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.070044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:34.070134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:31.986 [2024-04-18 09:11:34.070383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:42:31.986 [2024-04-18 09:11:34.070459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.070752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:31.986 [2024-04-18 09:11:34.070836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:31.986 [2024-04-18 09:11:34.070977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:42:31.986 [2024-04-18 09:11:34.071115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:31.986 [2024-04-18 09:11:34.072796] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4991.899 ms, result 0 00:42:31.986 { 00:42:31.986 "name": "ftl0", 00:42:31.986 "uuid": "4f7ae9dd-0c91-4528-ab01-5e6b80a3ef70" 00:42:31.986 } 00:42:32.244 09:11:34 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:42:32.244 09:11:34 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:42:32.244 09:11:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:42:32.244 09:11:34 -- common/autotest_common.sh@887 -- # local i 00:42:32.244 09:11:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:42:32.244 09:11:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:42:32.244 09:11:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:32.501 09:11:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:42:32.758 [ 00:42:32.758 { 00:42:32.758 "name": "ftl0", 00:42:32.758 "aliases": [ 00:42:32.758 "4f7ae9dd-0c91-4528-ab01-5e6b80a3ef70" 00:42:32.758 ], 00:42:32.758 "product_name": "FTL disk", 00:42:32.758 "block_size": 4096, 00:42:32.758 "num_blocks": 20971520, 00:42:32.758 "uuid": "4f7ae9dd-0c91-4528-ab01-5e6b80a3ef70", 00:42:32.758 "assigned_rate_limits": { 00:42:32.758 "rw_ios_per_sec": 0, 00:42:32.758 "rw_mbytes_per_sec": 0, 00:42:32.758 "r_mbytes_per_sec": 0, 00:42:32.758 "w_mbytes_per_sec": 0 00:42:32.758 }, 00:42:32.758 "claimed": false, 00:42:32.758 "zoned": false, 00:42:32.758 "supported_io_types": { 00:42:32.758 "read": true, 00:42:32.758 "write": true, 00:42:32.758 "unmap": true, 00:42:32.758 "write_zeroes": true, 00:42:32.758 "flush": true, 00:42:32.758 "reset": false, 00:42:32.758 "compare": false, 00:42:32.758 "compare_and_write": false, 00:42:32.758 "abort": false, 00:42:32.758 "nvme_admin": false, 00:42:32.758 "nvme_io": false 00:42:32.758 }, 
00:42:32.758 "driver_specific": { 00:42:32.758 "ftl": { 00:42:32.758 "base_bdev": "427357d9-6108-4677-8a95-24357e0ad567", 00:42:32.758 "cache": "nvc0n1p0" 00:42:32.758 } 00:42:32.758 } 00:42:32.758 } 00:42:32.758 ] 00:42:32.758 09:11:34 -- common/autotest_common.sh@893 -- # return 0 00:42:32.759 09:11:34 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:42:32.759 09:11:34 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:42:33.017 09:11:34 -- ftl/fio.sh@70 -- # echo ']}' 00:42:33.017 09:11:34 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:42:33.274 [2024-04-18 09:11:35.285692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.286012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:33.274 [2024-04-18 09:11:35.286149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:33.274 [2024-04-18 09:11:35.286201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.274 [2024-04-18 09:11:35.286354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:33.274 [2024-04-18 09:11:35.291062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.291327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:33.274 [2024-04-18 09:11:35.291452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:42:33.274 [2024-04-18 09:11:35.291504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.274 [2024-04-18 09:11:35.292170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.292323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:33.274 [2024-04-18 09:11:35.292446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:42:33.274 [2024-04-18 09:11:35.292505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.274 [2024-04-18 09:11:35.295684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.295879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:33.274 [2024-04-18 09:11:35.296002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:42:33.274 [2024-04-18 09:11:35.296049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.274 [2024-04-18 09:11:35.302390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.302679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:42:33.274 [2024-04-18 09:11:35.302802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:42:33.274 [2024-04-18 09:11:35.302912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.274 [2024-04-18 09:11:35.352940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.274 [2024-04-18 09:11:35.353222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:33.274 [2024-04-18 09:11:35.353324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.789 ms 00:42:33.274 [2024-04-18 09:11:35.353445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.381013] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.381323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:33.532 [2024-04-18 09:11:35.381456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.390 ms 00:42:33.532 [2024-04-18 09:11:35.381509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.381878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.382042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:33.532 [2024-04-18 09:11:35.382168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:42:33.532 [2024-04-18 09:11:35.382218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.432106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.432413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:42:33.532 [2024-04-18 09:11:35.432519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.777 ms 00:42:33.532 [2024-04-18 09:11:35.432563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.482778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.483053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:42:33.532 [2024-04-18 09:11:35.483161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.027 ms 00:42:33.532 [2024-04-18 09:11:35.483257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.532091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.532442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:33.532 [2024-04-18 09:11:35.532551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.682 ms 00:42:33.532 [2024-04-18 09:11:35.532632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.579608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.532 [2024-04-18 09:11:35.579914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:33.532 [2024-04-18 09:11:35.580026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.694 ms 00:42:33.532 [2024-04-18 09:11:35.580134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.532 [2024-04-18 09:11:35.580244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:33.532 [2024-04-18 09:11:35.580351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.580487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.580610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.580715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.580912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581045] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.581900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.582954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 
09:11:35.583748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.583943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.584880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.585924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.586933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.587126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.587258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.587462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.587590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:42:33.532 [2024-04-18 09:11:35.587787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.587966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.588133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.588305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:33.532 [2024-04-18 09:11:35.588465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.588632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.588803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.588965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.589949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.590941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.591952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.592968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:33.533 [2024-04-18 09:11:35.593856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:33.533 [2024-04-18 09:11:35.593981] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4f7ae9dd-0c91-4528-ab01-5e6b80a3ef70 00:42:33.533 [2024-04-18 09:11:35.594075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:33.533 [2024-04-18 09:11:35.594119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:33.533 [2024-04-18 09:11:35.594160] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:33.533 [2024-04-18 09:11:35.594256] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:33.533 [2024-04-18 09:11:35.594303] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:33.533 [2024-04-18 09:11:35.594356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:33.533 [2024-04-18 09:11:35.594484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:33.533 [2024-04-18 09:11:35.594572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:33.533 [2024-04-18 09:11:35.594617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:33.533 [2024-04-18 09:11:35.594688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.533 [2024-04-18 09:11:35.594731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:33.533 [2024-04-18 09:11:35.594774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.448 ms 00:42:33.533 [2024-04-18 09:11:35.594859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.533 [2024-04-18 09:11:35.619547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.533 [2024-04-18 09:11:35.619858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:33.533 [2024-04-18 09:11:35.619963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.493 ms 00:42:33.533 [2024-04-18 09:11:35.620011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.533 [2024-04-18 09:11:35.620508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:33.533 [2024-04-18 09:11:35.620640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:33.533 [2024-04-18 09:11:35.620739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:42:33.533 [2024-04-18 09:11:35.620782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.792 [2024-04-18 09:11:35.705301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:33.792 [2024-04-18 09:11:35.705598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:33.792 [2024-04-18 09:11:35.705710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:33.792 [2024-04-18 09:11:35.705754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.792 [2024-04-18 09:11:35.705905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:33.792 [2024-04-18 09:11:35.705993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:33.792 [2024-04-18 09:11:35.706092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:33.792 [2024-04-18 09:11:35.706168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.792 [2024-04-18 09:11:35.706361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:33.792 [2024-04-18 09:11:35.706477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:33.792 [2024-04-18 09:11:35.706562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:33.792 [2024-04-18 09:11:35.706646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.792 [2024-04-18 09:11:35.706723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:33.792 [2024-04-18 09:11:35.706763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
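The 'Rollback' entries that follow are the shutdown-side counterparts of the startup actions, unwound in reverse of their initialization order; each is reported here with a 0.000 ms duration. The whole 'FTL shutdown' sequence was triggered by the single RPC visible earlier in this log; for reference, the call is just:

  # Unload the FTL bdev: persists L2P/band/trim metadata and marks a clean shutdown
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

The 'Set FTL clean state' step recorded above is what lets a subsequent startup load this metadata directly instead of running dirty-shutdown recovery.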
00:42:33.792 [2024-04-18 09:11:35.706842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:33.792 [2024-04-18 09:11:35.706933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:33.792 [2024-04-18 09:11:35.872345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:33.792 [2024-04-18 09:11:35.872578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:33.792 [2024-04-18 09:11:35.872681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:33.792 [2024-04-18 09:11:35.872729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.931396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.931676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:34.049 [2024-04-18 09:11:35.931796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.931885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.932089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.932207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:34.049 [2024-04-18 09:11:35.932299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.932348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.932530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.932591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:34.049 [2024-04-18 09:11:35.932690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.932775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.932980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.933089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:34.049 [2024-04-18 09:11:35.933179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.933227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.933389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.933489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:34.049 [2024-04-18 09:11:35.933578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.933660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.933765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.933812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:34.049 [2024-04-18 09:11:35.933905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.933952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.934080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:34.049 [2024-04-18 09:11:35.934232] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:34.049 [2024-04-18 09:11:35.934322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:34.049 [2024-04-18 09:11:35.934382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.049 [2024-04-18 09:11:35.934625] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 648.891 ms, result 0 00:42:34.049 true 00:42:34.049 09:11:35 -- ftl/fio.sh@75 -- # killprocess 77526 00:42:34.049 09:11:35 -- common/autotest_common.sh@936 -- # '[' -z 77526 ']' 00:42:34.049 09:11:35 -- common/autotest_common.sh@940 -- # kill -0 77526 00:42:34.049 09:11:35 -- common/autotest_common.sh@941 -- # uname 00:42:34.049 09:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:42:34.049 09:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77526 00:42:34.049 killing process with pid 77526 00:42:34.049 09:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:42:34.049 09:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:42:34.049 09:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77526' 00:42:34.049 09:11:35 -- common/autotest_common.sh@955 -- # kill 77526 00:42:34.049 09:11:35 -- common/autotest_common.sh@960 -- # wait 77526 00:42:40.676 09:11:41 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:42:40.676 09:11:41 -- ftl/fio.sh@78 -- # for test in ${tests} 00:42:40.676 09:11:41 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:42:40.676 09:11:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:42:40.676 09:11:41 -- common/autotest_common.sh@10 -- # set +x 00:42:40.676 09:11:41 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:40.676 09:11:41 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:40.676 09:11:41 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:42:40.676 09:11:41 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:40.676 09:11:41 -- common/autotest_common.sh@1325 -- # local sanitizers 00:42:40.676 09:11:41 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:40.676 09:11:41 -- common/autotest_common.sh@1327 -- # shift 00:42:40.676 09:11:41 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:42:40.676 09:11:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:42:40.676 09:11:41 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:40.676 09:11:41 -- common/autotest_common.sh@1331 -- # grep libasan 00:42:40.676 09:11:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:42:40.676 09:11:41 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:40.676 09:11:41 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:40.676 09:11:41 -- common/autotest_common.sh@1333 -- # break 00:42:40.676 09:11:41 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:40.676 09:11:41 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:42:40.676 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:42:40.676 fio-3.35 00:42:40.676 Starting 1 thread 00:42:44.869 00:42:44.869 test: (groupid=0, jobs=1): err= 0: pid=77773: Thu Apr 18 09:11:46 2024 00:42:44.869 read: IOPS=1147, BW=76.2MiB/s (79.9MB/s)(255MiB/3341msec) 00:42:44.869 slat (usec): min=5, max=127, avg= 8.61, stdev= 5.24 00:42:44.869 clat (usec): min=270, max=6773, avg=384.59, stdev=136.48 00:42:44.869 lat (usec): min=281, max=6789, avg=393.21, stdev=137.03 00:42:44.869 clat percentiles (usec): 00:42:44.869 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:42:44.869 | 30.00th=[ 355], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:42:44.869 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 445], 95.00th=[ 478], 00:42:44.869 | 99.00th=[ 537], 99.50th=[ 594], 99.90th=[ 1123], 99.95th=[ 4883], 00:42:44.869 | 99.99th=[ 6783] 00:42:44.869 write: IOPS=1155, BW=76.7MiB/s (80.4MB/s)(256MiB/3338msec); 0 zone resets 00:42:44.869 slat (usec): min=18, max=128, avg=25.69, stdev= 8.82 00:42:44.870 clat (usec): min=314, max=1047, avg=435.91, stdev=58.20 00:42:44.870 lat (usec): min=354, max=1079, avg=461.60, stdev=58.83 00:42:44.870 clat percentiles (usec): 00:42:44.870 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:42:44.870 | 30.00th=[ 400], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 449], 00:42:44.870 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 502], 95.00th=[ 529], 00:42:44.870 | 99.00th=[ 635], 99.50th=[ 701], 99.90th=[ 807], 99.95th=[ 1045], 00:42:44.870 | 99.99th=[ 1045] 00:42:44.870 bw ( KiB/s): min=76704, max=79832, per=99.90%, avg=78472.00, stdev=1118.18, samples=6 00:42:44.870 iops : min= 1128, max= 1174, avg=1154.00, stdev=16.44, samples=6 00:42:44.870 lat (usec) : 500=93.73%, 750=6.07%, 1000=0.12% 00:42:44.870 lat (msec) : 2=0.05%, 10=0.03% 00:42:44.870 cpu : usr=98.77%, sys=0.27%, ctx=22, majf=0, minf=1171 00:42:44.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.870 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:44.870 00:42:44.870 Run status group 0 (all jobs): 00:42:44.870 READ: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=255MiB (267MB), run=3341-3341msec 00:42:44.870 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=256MiB (269MB), run=3338-3338msec 00:42:46.801 ----------------------------------------------------- 00:42:46.801 Suppressions used: 00:42:46.801 count bytes template 00:42:46.801 1 5 /usr/src/fio/parse.c 00:42:46.801 1 8 libtcmalloc_minimal.so 00:42:46.801 1 904 libcrypto.so 00:42:46.801 ----------------------------------------------------- 00:42:46.801 00:42:47.059 09:11:48 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:42:47.059 09:11:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:42:47.059 09:11:48 -- common/autotest_common.sh@10 -- # set +x 00:42:47.059 09:11:48 -- ftl/fio.sh@78 -- # for test in ${tests} 00:42:47.059 09:11:48 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:42:47.059 09:11:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:42:47.059 09:11:48 -- common/autotest_common.sh@10 -- # set +x 00:42:47.059 09:11:48 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:42:47.059 09:11:48 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:42:47.059 09:11:48 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:42:47.059 09:11:48 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:47.059 09:11:48 -- common/autotest_common.sh@1325 -- # local sanitizers 00:42:47.059 09:11:48 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:47.059 09:11:48 -- common/autotest_common.sh@1327 -- # shift 00:42:47.059 09:11:48 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:42:47.059 09:11:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.059 09:11:48 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:47.059 09:11:48 -- common/autotest_common.sh@1331 -- # grep libasan 00:42:47.059 09:11:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:42:47.059 09:11:49 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:47.059 09:11:49 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:47.059 09:11:49 -- common/autotest_common.sh@1333 -- # break 00:42:47.059 09:11:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:47.059 09:11:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:42:47.317 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:42:47.317 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:42:47.317 fio-3.35 00:42:47.317 Starting 2 threads 00:43:19.377 00:43:19.377 first_half: (groupid=0, jobs=1): err= 0: pid=77882: Thu Apr 18 09:12:17 2024 00:43:19.377 read: IOPS=2422, BW=9692KiB/s (9924kB/s)(255MiB/26926msec) 00:43:19.377 slat (usec): min=4, max=150, avg= 7.02, stdev= 2.16 00:43:19.377 clat (usec): min=819, max=365838, avg=40275.99, stdev=23578.94 00:43:19.377 lat (usec): min=827, max=365851, avg=40283.01, stdev=23579.07 00:43:19.377 clat percentiles (msec): 00:43:19.377 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:43:19.377 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:43:19.377 | 70.00th=[ 38], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 54], 00:43:19.377 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 255], 99.95th=[ 309], 00:43:19.377 | 99.99th=[ 355] 00:43:19.377 write: IOPS=3106, BW=12.1MiB/s (12.7MB/s)(256MiB/21098msec); 0 zone resets 00:43:19.377 slat (usec): min=4, max=768, avg= 9.12, stdev= 7.81 00:43:19.377 clat (usec): min=405, max=127325, avg=12440.00, stdev=21569.89 00:43:19.377 lat (usec): min=423, max=127335, avg=12449.12, stdev=21570.09 00:43:19.377 clat percentiles (usec): 00:43:19.377 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1401], 20.00th=[ 1876], 00:43:19.377 | 30.00th=[ 3326], 40.00th=[ 5080], 50.00th=[ 6259], 60.00th=[ 7111], 00:43:19.377 | 70.00th=[ 8586], 80.00th=[ 12649], 90.00th=[ 16188], 95.00th=[ 79168], 00:43:19.377 | 99.00th=[ 98042], 99.50th=[102237], 99.90th=[120062], 99.95th=[124257], 00:43:19.377 | 99.99th=[126354] 00:43:19.377 bw ( KiB/s): min= 2808, max=40056, 
per=87.43%, avg=20971.20, stdev=10928.55, samples=25 00:43:19.377 iops : min= 702, max=10014, avg=5242.88, stdev=2732.22, samples=25 00:43:19.377 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.52% 00:43:19.377 lat (msec) : 2=10.49%, 4=6.17%, 10=20.49%, 20=8.80%, 50=46.54% 00:43:19.377 lat (msec) : 100=5.19%, 250=1.66%, 500=0.06% 00:43:19.377 cpu : usr=99.14%, sys=0.19%, ctx=162, majf=0, minf=5562 00:43:19.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:43:19.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.377 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:19.377 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:19.377 second_half: (groupid=0, jobs=1): err= 0: pid=77883: Thu Apr 18 09:12:17 2024 00:43:19.377 read: IOPS=2410, BW=9640KiB/s (9872kB/s)(255MiB/27075msec) 00:43:19.377 slat (usec): min=4, max=676, avg= 6.94, stdev= 3.47 00:43:19.377 clat (usec): min=920, max=370685, avg=39646.35, stdev=24129.07 00:43:19.377 lat (usec): min=928, max=370693, avg=39653.29, stdev=24129.34 00:43:19.377 clat percentiles (msec): 00:43:19.377 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:43:19.377 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:43:19.377 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 52], 00:43:19.377 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 279], 99.95th=[ 342], 00:43:19.377 | 99.99th=[ 368] 00:43:19.377 write: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(256MiB/21857msec); 0 zone resets 00:43:19.377 slat (usec): min=5, max=262, avg= 9.08, stdev= 5.29 00:43:19.377 clat (usec): min=424, max=128036, avg=13380.37, stdev=22463.80 00:43:19.377 lat (usec): min=441, max=128043, avg=13389.45, stdev=22463.95 00:43:19.377 clat percentiles (usec): 00:43:19.377 | 1.00th=[ 938], 5.00th=[ 1156], 10.00th=[ 1369], 20.00th=[ 1811], 00:43:19.377 | 30.00th=[ 3228], 40.00th=[ 4752], 50.00th=[ 6063], 60.00th=[ 7177], 00:43:19.377 | 70.00th=[ 8979], 80.00th=[ 13304], 90.00th=[ 37487], 95.00th=[ 80217], 00:43:19.377 | 99.00th=[100140], 99.50th=[105382], 99.90th=[116917], 99.95th=[120062], 00:43:19.377 | 99.99th=[127402] 00:43:19.377 bw ( KiB/s): min= 1000, max=34272, per=80.94%, avg=19415.52, stdev=9969.50, samples=27 00:43:19.377 iops : min= 250, max= 8568, avg=4853.85, stdev=2492.33, samples=27 00:43:19.377 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.77% 00:43:19.377 lat (msec) : 2=10.78%, 4=6.37%, 10=19.79%, 20=8.30%, 50=47.18% 00:43:19.377 lat (msec) : 100=4.77%, 250=1.90%, 500=0.06% 00:43:19.377 cpu : usr=99.05%, sys=0.19%, ctx=149, majf=0, minf=5558 00:43:19.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:43:19.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.377 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:19.377 issued rwts: total=65252,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:19.377 00:43:19.377 Run status group 0 (all jobs): 00:43:19.377 READ: bw=18.8MiB/s (19.7MB/s), 9640KiB/s-9692KiB/s (9872kB/s-9924kB/s), io=510MiB (534MB), run=26926-27075msec 00:43:19.377 WRITE: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-12.1MiB/s (12.3MB/s-12.7MB/s), io=512MiB (537MB), run=21098-21857msec 00:43:19.377 ----------------------------------------------------- 00:43:19.377 Suppressions used: 00:43:19.377 count bytes template 
00:43:19.377 2 10 /usr/src/fio/parse.c 00:43:19.377 2 192 /usr/src/fio/iolog.c 00:43:19.377 1 8 libtcmalloc_minimal.so 00:43:19.377 1 904 libcrypto.so 00:43:19.377 ----------------------------------------------------- 00:43:19.377 00:43:19.377 09:12:20 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:43:19.377 09:12:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:43:19.377 09:12:20 -- common/autotest_common.sh@10 -- # set +x 00:43:19.377 09:12:20 -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:19.377 09:12:20 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:43:19.377 09:12:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:43:19.377 09:12:20 -- common/autotest_common.sh@10 -- # set +x 00:43:19.377 09:12:20 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:19.377 09:12:20 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:19.377 09:12:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:43:19.377 09:12:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:19.377 09:12:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:43:19.377 09:12:20 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:19.377 09:12:20 -- common/autotest_common.sh@1327 -- # shift 00:43:19.377 09:12:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:43:19.377 09:12:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:43:19.377 09:12:20 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:19.377 09:12:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:43:19.377 09:12:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:43:19.377 09:12:20 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:19.377 09:12:20 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:19.377 09:12:20 -- common/autotest_common.sh@1333 -- # break 00:43:19.377 09:12:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:19.377 09:12:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:19.377 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:19.377 fio-3.35 00:43:19.377 Starting 1 thread 00:43:37.497 00:43:37.497 test: (groupid=0, jobs=1): err= 0: pid=78235: Thu Apr 18 09:12:36 2024 00:43:37.497 read: IOPS=6967, BW=27.2MiB/s (28.5MB/s)(255MiB/9358msec) 00:43:37.497 slat (nsec): min=3851, max=71849, avg=5976.74, stdev=1590.69 00:43:37.497 clat (usec): min=715, max=34275, avg=18360.74, stdev=1236.16 00:43:37.497 lat (usec): min=719, max=34281, avg=18366.72, stdev=1236.23 00:43:37.497 clat percentiles (usec): 00:43:37.497 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17171], 20.00th=[17433], 00:43:37.497 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:43:37.497 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[20841], 00:43:37.497 | 99.00th=[22676], 99.50th=[23462], 99.90th=[26084], 99.95th=[30016], 00:43:37.497 | 99.99th=[33424] 00:43:37.497 write: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(256MiB/5141msec); 0 zone 
resets 00:43:37.497 slat (usec): min=4, max=681, avg= 8.43, stdev= 5.66 00:43:37.497 clat (usec): min=553, max=60830, avg=9987.14, stdev=12454.50 00:43:37.497 lat (usec): min=562, max=60839, avg=9995.56, stdev=12454.51 00:43:37.497 clat percentiles (usec): 00:43:37.498 | 1.00th=[ 906], 5.00th=[ 1057], 10.00th=[ 1172], 20.00th=[ 1336], 00:43:37.498 | 30.00th=[ 1516], 40.00th=[ 2024], 50.00th=[ 6521], 60.00th=[ 7635], 00:43:37.498 | 70.00th=[ 8848], 80.00th=[10683], 90.00th=[35914], 95.00th=[39060], 00:43:37.498 | 99.00th=[43779], 99.50th=[44827], 99.90th=[46924], 99.95th=[49546], 00:43:37.498 | 99.99th=[56361] 00:43:37.498 bw ( KiB/s): min=11504, max=68848, per=93.47%, avg=47662.55, stdev=14902.53, samples=11 00:43:37.498 iops : min= 2876, max=17212, avg=11915.64, stdev=3725.63, samples=11 00:43:37.498 lat (usec) : 750=0.05%, 1000=1.43% 00:43:37.498 lat (msec) : 2=18.54%, 4=1.01%, 10=17.72%, 20=49.48%, 50=11.76% 00:43:37.498 lat (msec) : 100=0.02% 00:43:37.498 cpu : usr=98.88%, sys=0.34%, ctx=112, majf=0, minf=5566 00:43:37.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:43:37.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:37.498 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:37.498 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:37.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:37.498 00:43:37.498 Run status group 0 (all jobs): 00:43:37.498 READ: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=255MiB (267MB), run=9358-9358msec 00:43:37.498 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=256MiB (268MB), run=5141-5141msec 00:43:37.498 ----------------------------------------------------- 00:43:37.498 Suppressions used: 00:43:37.498 count bytes template 00:43:37.498 1 5 /usr/src/fio/parse.c 00:43:37.498 2 192 /usr/src/fio/iolog.c 00:43:37.498 1 8 libtcmalloc_minimal.so 00:43:37.498 1 904 libcrypto.so 00:43:37.498 ----------------------------------------------------- 00:43:37.498 00:43:37.498 09:12:39 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:43:37.498 09:12:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:43:37.498 09:12:39 -- common/autotest_common.sh@10 -- # set +x 00:43:37.498 09:12:39 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:37.498 09:12:39 -- ftl/fio.sh@85 -- # remove_shm 00:43:37.498 09:12:39 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:43:37.498 Remove shared memory files 00:43:37.498 09:12:39 -- ftl/common.sh@205 -- # rm -f rm -f 00:43:37.498 09:12:39 -- ftl/common.sh@206 -- # rm -f rm -f 00:43:37.498 09:12:39 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61480 /dev/shm/spdk_tgt_trace.pid76373 00:43:37.498 09:12:39 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:43:37.498 09:12:39 -- ftl/common.sh@209 -- # rm -f rm -f 00:43:37.498 ************************************ 00:43:37.498 END TEST ftl_fio_basic 00:43:37.498 ************************************ 00:43:37.498 00:43:37.498 real 1m15.418s 00:43:37.498 user 2m46.448s 00:43:37.498 sys 0m4.251s 00:43:37.498 09:12:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:43:37.498 09:12:39 -- common/autotest_common.sh@10 -- # set +x 00:43:37.498 09:12:39 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:43:37.498 09:12:39 -- common/autotest_common.sh@1087 
-- # '[' 4 -le 1 ']' 00:43:37.498 09:12:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:43:37.498 09:12:39 -- common/autotest_common.sh@10 -- # set +x 00:43:37.498 ************************************ 00:43:37.498 START TEST ftl_bdevperf 00:43:37.498 ************************************ 00:43:37.498 09:12:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:43:37.498 * Looking for test storage... 00:43:37.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:37.498 09:12:39 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:43:37.498 09:12:39 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:37.498 09:12:39 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:37.498 09:12:39 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:43:37.498 09:12:39 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:37.498 09:12:39 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:37.498 09:12:39 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:37.498 09:12:39 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:37.498 09:12:39 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:37.498 09:12:39 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:37.498 09:12:39 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:37.498 09:12:39 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:37.498 09:12:39 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:37.498 09:12:39 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:37.498 09:12:39 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:37.498 09:12:39 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:37.498 09:12:39 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:37.498 09:12:39 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:37.498 09:12:39 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:37.498 09:12:39 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:37.498 09:12:39 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:37.498 09:12:39 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:37.498 09:12:39 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:37.498 09:12:39 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:37.498 09:12:39 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:37.498 09:12:39 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:37.498 09:12:39 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.498 09:12:39 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@13 -- # use_append= 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
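Stripped of the xtrace prefixes, the bring-up traced below reduces to a short RPC sequence. A condensed sketch, not the script itself (addresses, sizes, and bdev names are the ones visible in this run; the lvstore and lvol UUIDs are generated fresh each time, and error handling is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                     # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")          # thin lvol, prints its UUID
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB split for the write buffer
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20

For scale: the base namespace reports 1310720 blocks of 4096 B, i.e. 5120 MiB, while the thin-provisioned lvol advertises 26476544 blocks (103424 MiB); the 5171 MiB cache split works out to about one twentieth of the latter, as the get_bdev_size/jq trace below shows.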
00:43:37.498 09:12:39 -- ftl/bdevperf.sh@15 -- # timeout=240 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:43:37.498 09:12:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:43:37.498 09:12:39 -- common/autotest_common.sh@10 -- # set +x 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78478 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:43:37.498 09:12:39 -- ftl/bdevperf.sh@22 -- # waitforlisten 78478 00:43:37.498 09:12:39 -- common/autotest_common.sh@817 -- # '[' -z 78478 ']' 00:43:37.498 09:12:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:37.498 09:12:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:43:37.498 09:12:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:37.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:37.498 09:12:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:43:37.498 09:12:39 -- common/autotest_common.sh@10 -- # set +x 00:43:37.498 [2024-04-18 09:12:39.469140] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:43:37.498 [2024-04-18 09:12:39.469544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78478 ] 00:43:37.756 [2024-04-18 09:12:39.656353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:38.014 [2024-04-18 09:12:39.960073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:38.579 09:12:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:43:38.579 09:12:40 -- common/autotest_common.sh@850 -- # return 0 00:43:38.579 09:12:40 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:43:38.579 09:12:40 -- ftl/common.sh@54 -- # local name=nvme0 00:43:38.579 09:12:40 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:43:38.579 09:12:40 -- ftl/common.sh@56 -- # local size=103424 00:43:38.579 09:12:40 -- ftl/common.sh@59 -- # local base_bdev 00:43:38.579 09:12:40 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:43:38.837 09:12:40 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:43:38.837 09:12:40 -- ftl/common.sh@62 -- # local base_size 00:43:38.837 09:12:40 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:43:38.837 09:12:40 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:43:38.837 09:12:40 -- common/autotest_common.sh@1365 -- # local bdev_info 00:43:38.837 09:12:40 -- common/autotest_common.sh@1366 -- # local bs 00:43:38.837 09:12:40 -- common/autotest_common.sh@1367 -- # local nb 00:43:38.837 09:12:40 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:43:39.096 09:12:41 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:43:39.096 { 00:43:39.096 "name": "nvme0n1", 00:43:39.096 "aliases": [ 00:43:39.096 "13d38642-ae20-418a-bca4-7e7c5ae348d6" 00:43:39.096 ], 00:43:39.096 "product_name": "NVMe disk", 00:43:39.096 "block_size": 4096, 00:43:39.096 "num_blocks": 
1310720, 00:43:39.096 "uuid": "13d38642-ae20-418a-bca4-7e7c5ae348d6", 00:43:39.096 "assigned_rate_limits": { 00:43:39.096 "rw_ios_per_sec": 0, 00:43:39.096 "rw_mbytes_per_sec": 0, 00:43:39.096 "r_mbytes_per_sec": 0, 00:43:39.096 "w_mbytes_per_sec": 0 00:43:39.096 }, 00:43:39.096 "claimed": true, 00:43:39.096 "claim_type": "read_many_write_one", 00:43:39.096 "zoned": false, 00:43:39.096 "supported_io_types": { 00:43:39.096 "read": true, 00:43:39.096 "write": true, 00:43:39.096 "unmap": true, 00:43:39.096 "write_zeroes": true, 00:43:39.096 "flush": true, 00:43:39.096 "reset": true, 00:43:39.096 "compare": true, 00:43:39.096 "compare_and_write": false, 00:43:39.096 "abort": true, 00:43:39.096 "nvme_admin": true, 00:43:39.096 "nvme_io": true 00:43:39.096 }, 00:43:39.096 "driver_specific": { 00:43:39.096 "nvme": [ 00:43:39.096 { 00:43:39.096 "pci_address": "0000:00:11.0", 00:43:39.096 "trid": { 00:43:39.096 "trtype": "PCIe", 00:43:39.096 "traddr": "0000:00:11.0" 00:43:39.096 }, 00:43:39.096 "ctrlr_data": { 00:43:39.096 "cntlid": 0, 00:43:39.096 "vendor_id": "0x1b36", 00:43:39.096 "model_number": "QEMU NVMe Ctrl", 00:43:39.096 "serial_number": "12341", 00:43:39.096 "firmware_revision": "8.0.0", 00:43:39.096 "subnqn": "nqn.2019-08.org.qemu:12341", 00:43:39.096 "oacs": { 00:43:39.096 "security": 0, 00:43:39.096 "format": 1, 00:43:39.096 "firmware": 0, 00:43:39.096 "ns_manage": 1 00:43:39.096 }, 00:43:39.096 "multi_ctrlr": false, 00:43:39.096 "ana_reporting": false 00:43:39.096 }, 00:43:39.096 "vs": { 00:43:39.096 "nvme_version": "1.4" 00:43:39.096 }, 00:43:39.096 "ns_data": { 00:43:39.096 "id": 1, 00:43:39.096 "can_share": false 00:43:39.096 } 00:43:39.096 } 00:43:39.096 ], 00:43:39.096 "mp_policy": "active_passive" 00:43:39.096 } 00:43:39.096 } 00:43:39.096 ]' 00:43:39.096 09:12:41 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:43:39.096 09:12:41 -- common/autotest_common.sh@1369 -- # bs=4096 00:43:39.096 09:12:41 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:43:39.096 09:12:41 -- common/autotest_common.sh@1370 -- # nb=1310720 00:43:39.096 09:12:41 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:43:39.096 09:12:41 -- common/autotest_common.sh@1374 -- # echo 5120 00:43:39.096 09:12:41 -- ftl/common.sh@63 -- # base_size=5120 00:43:39.096 09:12:41 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:43:39.096 09:12:41 -- ftl/common.sh@67 -- # clear_lvols 00:43:39.096 09:12:41 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:39.096 09:12:41 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:43:39.354 09:12:41 -- ftl/common.sh@28 -- # stores=b716af86-9225-4092-8dfc-43febcee341e 00:43:39.354 09:12:41 -- ftl/common.sh@29 -- # for lvs in $stores 00:43:39.354 09:12:41 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b716af86-9225-4092-8dfc-43febcee341e 00:43:39.613 09:12:41 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:43:39.871 09:12:41 -- ftl/common.sh@68 -- # lvs=57c49b0b-9de2-4486-b37d-4fd13b077424 00:43:39.871 09:12:41 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 57c49b0b-9de2-4486-b37d-4fd13b077424 00:43:40.131 09:12:42 -- ftl/bdevperf.sh@23 -- # split_bdev=e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.131 09:12:42 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.131 09:12:42 -- 
ftl/common.sh@35 -- # local name=nvc0 00:43:40.131 09:12:42 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:43:40.131 09:12:42 -- ftl/common.sh@37 -- # local base_bdev=e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.131 09:12:42 -- ftl/common.sh@38 -- # local cache_size= 00:43:40.131 09:12:42 -- ftl/common.sh@41 -- # get_bdev_size e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.131 09:12:42 -- common/autotest_common.sh@1364 -- # local bdev_name=e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.131 09:12:42 -- common/autotest_common.sh@1365 -- # local bdev_info 00:43:40.131 09:12:42 -- common/autotest_common.sh@1366 -- # local bs 00:43:40.131 09:12:42 -- common/autotest_common.sh@1367 -- # local nb 00:43:40.131 09:12:42 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.389 09:12:42 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:43:40.389 { 00:43:40.389 "name": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:40.389 "aliases": [ 00:43:40.389 "lvs/nvme0n1p0" 00:43:40.389 ], 00:43:40.389 "product_name": "Logical Volume", 00:43:40.389 "block_size": 4096, 00:43:40.389 "num_blocks": 26476544, 00:43:40.389 "uuid": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:40.389 "assigned_rate_limits": { 00:43:40.389 "rw_ios_per_sec": 0, 00:43:40.389 "rw_mbytes_per_sec": 0, 00:43:40.389 "r_mbytes_per_sec": 0, 00:43:40.389 "w_mbytes_per_sec": 0 00:43:40.389 }, 00:43:40.389 "claimed": false, 00:43:40.389 "zoned": false, 00:43:40.389 "supported_io_types": { 00:43:40.389 "read": true, 00:43:40.389 "write": true, 00:43:40.389 "unmap": true, 00:43:40.389 "write_zeroes": true, 00:43:40.389 "flush": false, 00:43:40.389 "reset": true, 00:43:40.389 "compare": false, 00:43:40.389 "compare_and_write": false, 00:43:40.389 "abort": false, 00:43:40.389 "nvme_admin": false, 00:43:40.389 "nvme_io": false 00:43:40.389 }, 00:43:40.389 "driver_specific": { 00:43:40.389 "lvol": { 00:43:40.389 "lvol_store_uuid": "57c49b0b-9de2-4486-b37d-4fd13b077424", 00:43:40.389 "base_bdev": "nvme0n1", 00:43:40.389 "thin_provision": true, 00:43:40.389 "snapshot": false, 00:43:40.389 "clone": false, 00:43:40.389 "esnap_clone": false 00:43:40.389 } 00:43:40.389 } 00:43:40.389 } 00:43:40.389 ]' 00:43:40.389 09:12:42 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:43:40.647 09:12:42 -- common/autotest_common.sh@1369 -- # bs=4096 00:43:40.647 09:12:42 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:43:40.647 09:12:42 -- common/autotest_common.sh@1370 -- # nb=26476544 00:43:40.647 09:12:42 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:43:40.647 09:12:42 -- common/autotest_common.sh@1374 -- # echo 103424 00:43:40.647 09:12:42 -- ftl/common.sh@41 -- # local base_size=5171 00:43:40.647 09:12:42 -- ftl/common.sh@44 -- # local nvc_bdev 00:43:40.647 09:12:42 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:43:40.906 09:12:42 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:43:40.906 09:12:42 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:43:40.906 09:12:42 -- ftl/common.sh@48 -- # get_bdev_size e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.906 09:12:42 -- common/autotest_common.sh@1364 -- # local bdev_name=e12cf37c-1c56-4037-add6-60350093afd2 00:43:40.906 09:12:42 -- common/autotest_common.sh@1365 -- # local bdev_info 00:43:40.906 09:12:42 -- common/autotest_common.sh@1366 -- # local bs 00:43:40.906 09:12:42 -- common/autotest_common.sh@1367 -- 
# local nb 00:43:40.906 09:12:42 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e12cf37c-1c56-4037-add6-60350093afd2 00:43:41.164 09:12:43 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:43:41.164 { 00:43:41.164 "name": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:41.164 "aliases": [ 00:43:41.164 "lvs/nvme0n1p0" 00:43:41.164 ], 00:43:41.164 "product_name": "Logical Volume", 00:43:41.164 "block_size": 4096, 00:43:41.164 "num_blocks": 26476544, 00:43:41.164 "uuid": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:41.164 "assigned_rate_limits": { 00:43:41.164 "rw_ios_per_sec": 0, 00:43:41.164 "rw_mbytes_per_sec": 0, 00:43:41.164 "r_mbytes_per_sec": 0, 00:43:41.164 "w_mbytes_per_sec": 0 00:43:41.164 }, 00:43:41.164 "claimed": false, 00:43:41.164 "zoned": false, 00:43:41.164 "supported_io_types": { 00:43:41.164 "read": true, 00:43:41.164 "write": true, 00:43:41.164 "unmap": true, 00:43:41.164 "write_zeroes": true, 00:43:41.164 "flush": false, 00:43:41.164 "reset": true, 00:43:41.164 "compare": false, 00:43:41.164 "compare_and_write": false, 00:43:41.164 "abort": false, 00:43:41.164 "nvme_admin": false, 00:43:41.164 "nvme_io": false 00:43:41.164 }, 00:43:41.164 "driver_specific": { 00:43:41.164 "lvol": { 00:43:41.164 "lvol_store_uuid": "57c49b0b-9de2-4486-b37d-4fd13b077424", 00:43:41.164 "base_bdev": "nvme0n1", 00:43:41.164 "thin_provision": true, 00:43:41.164 "snapshot": false, 00:43:41.164 "clone": false, 00:43:41.164 "esnap_clone": false 00:43:41.164 } 00:43:41.164 } 00:43:41.164 } 00:43:41.164 ]' 00:43:41.164 09:12:43 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:43:41.164 09:12:43 -- common/autotest_common.sh@1369 -- # bs=4096 00:43:41.164 09:12:43 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:43:41.164 09:12:43 -- common/autotest_common.sh@1370 -- # nb=26476544 00:43:41.164 09:12:43 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:43:41.164 09:12:43 -- common/autotest_common.sh@1374 -- # echo 103424 00:43:41.164 09:12:43 -- ftl/common.sh@48 -- # cache_size=5171 00:43:41.164 09:12:43 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:43:41.422 09:12:43 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:43:41.422 09:12:43 -- ftl/bdevperf.sh@26 -- # get_bdev_size e12cf37c-1c56-4037-add6-60350093afd2 00:43:41.422 09:12:43 -- common/autotest_common.sh@1364 -- # local bdev_name=e12cf37c-1c56-4037-add6-60350093afd2 00:43:41.422 09:12:43 -- common/autotest_common.sh@1365 -- # local bdev_info 00:43:41.422 09:12:43 -- common/autotest_common.sh@1366 -- # local bs 00:43:41.422 09:12:43 -- common/autotest_common.sh@1367 -- # local nb 00:43:41.422 09:12:43 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e12cf37c-1c56-4037-add6-60350093afd2 00:43:41.680 09:12:43 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:43:41.680 { 00:43:41.680 "name": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:41.680 "aliases": [ 00:43:41.680 "lvs/nvme0n1p0" 00:43:41.680 ], 00:43:41.680 "product_name": "Logical Volume", 00:43:41.680 "block_size": 4096, 00:43:41.680 "num_blocks": 26476544, 00:43:41.680 "uuid": "e12cf37c-1c56-4037-add6-60350093afd2", 00:43:41.680 "assigned_rate_limits": { 00:43:41.680 "rw_ios_per_sec": 0, 00:43:41.680 "rw_mbytes_per_sec": 0, 00:43:41.680 "r_mbytes_per_sec": 0, 00:43:41.680 "w_mbytes_per_sec": 0 00:43:41.680 }, 00:43:41.680 "claimed": false, 00:43:41.680 "zoned": false, 00:43:41.680 
"supported_io_types": { 00:43:41.680 "read": true, 00:43:41.680 "write": true, 00:43:41.680 "unmap": true, 00:43:41.680 "write_zeroes": true, 00:43:41.680 "flush": false, 00:43:41.680 "reset": true, 00:43:41.680 "compare": false, 00:43:41.680 "compare_and_write": false, 00:43:41.680 "abort": false, 00:43:41.680 "nvme_admin": false, 00:43:41.680 "nvme_io": false 00:43:41.680 }, 00:43:41.680 "driver_specific": { 00:43:41.680 "lvol": { 00:43:41.680 "lvol_store_uuid": "57c49b0b-9de2-4486-b37d-4fd13b077424", 00:43:41.680 "base_bdev": "nvme0n1", 00:43:41.680 "thin_provision": true, 00:43:41.680 "snapshot": false, 00:43:41.680 "clone": false, 00:43:41.680 "esnap_clone": false 00:43:41.680 } 00:43:41.680 } 00:43:41.680 } 00:43:41.680 ]' 00:43:41.680 09:12:43 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:43:41.680 09:12:43 -- common/autotest_common.sh@1369 -- # bs=4096 00:43:41.680 09:12:43 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:43:41.680 09:12:43 -- common/autotest_common.sh@1370 -- # nb=26476544 00:43:41.680 09:12:43 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:43:41.680 09:12:43 -- common/autotest_common.sh@1374 -- # echo 103424 00:43:41.680 09:12:43 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:43:41.680 09:12:43 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e12cf37c-1c56-4037-add6-60350093afd2 -c nvc0n1p0 --l2p_dram_limit 20 00:43:41.938 [2024-04-18 09:12:44.031366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.938 [2024-04-18 09:12:44.031635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:41.938 [2024-04-18 09:12:44.031739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:41.938 [2024-04-18 09:12:44.031786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.938 [2024-04-18 09:12:44.031930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.938 [2024-04-18 09:12:44.032042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:41.938 [2024-04-18 09:12:44.032086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:43:41.938 [2024-04-18 09:12:44.032123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.938 [2024-04-18 09:12:44.032240] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:41.938 [2024-04-18 09:12:44.033759] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:41.938 [2024-04-18 09:12:44.033920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.938 [2024-04-18 09:12:44.034008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:41.938 [2024-04-18 09:12:44.034048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:43:41.938 [2024-04-18 09:12:44.034086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.938 [2024-04-18 09:12:44.034418] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d86e0aa5-e3a9-45ca-8fbf-353c5578e30e 00:43:41.938 [2024-04-18 09:12:44.035994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.938 [2024-04-18 09:12:44.036141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:43:41.938 [2024-04-18 09:12:44.036245] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:43:41.939 [2024-04-18 09:12:44.036287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.044258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.044499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:42.198 [2024-04-18 09:12:44.044606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.852 ms 00:43:42.198 [2024-04-18 09:12:44.044648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.044860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.044908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:42.198 [2024-04-18 09:12:44.044948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:43:42.198 [2024-04-18 09:12:44.045028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.045149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.045191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:42.198 [2024-04-18 09:12:44.045237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:42.198 [2024-04-18 09:12:44.045330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.045409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:42.198 [2024-04-18 09:12:44.052587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.052749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:42.198 [2024-04-18 09:12:44.052839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.191 ms 00:43:42.198 [2024-04-18 09:12:44.052890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.053012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.053059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:42.198 [2024-04-18 09:12:44.053095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:43:42.198 [2024-04-18 09:12:44.053199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.053277] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:43:42.198 [2024-04-18 09:12:44.053447] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:43:42.198 [2024-04-18 09:12:44.053513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:42.198 [2024-04-18 09:12:44.053630] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:43:42.198 [2024-04-18 09:12:44.053692] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:42.198 [2024-04-18 09:12:44.053800] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:42.198 [2024-04-18 09:12:44.053858] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:43:42.198 [2024-04-18 09:12:44.053895] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:42.198 [2024-04-18 09:12:44.053982] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:43:42.198 [2024-04-18 09:12:44.054019] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:43:42.198 [2024-04-18 09:12:44.054091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.054136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:42.198 [2024-04-18 09:12:44.054172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:43:42.198 [2024-04-18 09:12:44.054258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.054388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.198 [2024-04-18 09:12:44.054434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:42.198 [2024-04-18 09:12:44.054469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:43:42.198 [2024-04-18 09:12:44.054573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.198 [2024-04-18 09:12:44.054689] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:42.198 [2024-04-18 09:12:44.054730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:42.198 [2024-04-18 09:12:44.054765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:42.198 [2024-04-18 09:12:44.054849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:42.198 [2024-04-18 09:12:44.054903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:42.198 [2024-04-18 09:12:44.054939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:42.198 [2024-04-18 09:12:44.054972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:42.198 [2024-04-18 09:12:44.055009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:42.198 [2024-04-18 09:12:44.055056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:42.198 [2024-04-18 09:12:44.055139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:42.198 [2024-04-18 09:12:44.055179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:42.198 [2024-04-18 09:12:44.055215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:42.198 [2024-04-18 09:12:44.055248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:42.198 [2024-04-18 09:12:44.055288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:42.198 [2024-04-18 09:12:44.055321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:43:42.198 [2024-04-18 09:12:44.055432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:42.198 [2024-04-18 09:12:44.055469] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:42.198 [2024-04-18 09:12:44.055505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:43:42.198 [2024-04-18 09:12:44.055539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:42.198 [2024-04-18 09:12:44.055578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:43:42.198 [2024-04-18 
09:12:44.055612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:43:42.198 [2024-04-18 09:12:44.055705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:43:42.198 [2024-04-18 09:12:44.055761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:42.198 [2024-04-18 09:12:44.055797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:42.198 [2024-04-18 09:12:44.055830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:43:42.198 [2024-04-18 09:12:44.055897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:42.198 [2024-04-18 09:12:44.055933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:43:42.198 [2024-04-18 09:12:44.056019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:43:42.198 [2024-04-18 09:12:44.056059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:42.198 [2024-04-18 09:12:44.056096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:42.198 [2024-04-18 09:12:44.056131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:43:42.198 [2024-04-18 09:12:44.056168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:42.198 [2024-04-18 09:12:44.056203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:43:42.198 [2024-04-18 09:12:44.056307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:43:42.198 [2024-04-18 09:12:44.056362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:42.198 [2024-04-18 09:12:44.056414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:42.198 [2024-04-18 09:12:44.056450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:42.198 [2024-04-18 09:12:44.056488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:42.198 [2024-04-18 09:12:44.056522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:43:42.198 [2024-04-18 09:12:44.056589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:42.198 [2024-04-18 09:12:44.056658] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:42.198 [2024-04-18 09:12:44.056717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:42.198 [2024-04-18 09:12:44.056771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:42.198 [2024-04-18 09:12:44.056814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:42.198 [2024-04-18 09:12:44.056850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:42.198 [2024-04-18 09:12:44.056887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:42.198 [2024-04-18 09:12:44.056921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:42.198 [2024-04-18 09:12:44.057010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:42.198 [2024-04-18 09:12:44.057063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:42.198 [2024-04-18 09:12:44.057121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:42.198 [2024-04-18 09:12:44.057157] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:42.198 [2024-04-18 09:12:44.057223] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:42.198 [2024-04-18 09:12:44.057349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:42.199 [2024-04-18 09:12:44.057441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:43:42.199 [2024-04-18 09:12:44.057558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:43:42.199 [2024-04-18 09:12:44.057622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:43:42.199 [2024-04-18 09:12:44.057729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:43:42.199 [2024-04-18 09:12:44.057828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:43:42.199 [2024-04-18 09:12:44.057884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:43:42.199 [2024-04-18 09:12:44.058014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:43:42.199 [2024-04-18 09:12:44.058068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:43:42.199 [2024-04-18 09:12:44.058124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:43:42.199 [2024-04-18 09:12:44.058225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:43:42.199 [2024-04-18 09:12:44.058289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:43:42.199 [2024-04-18 09:12:44.058344] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:43:42.199 [2024-04-18 09:12:44.058462] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:42.199 [2024-04-18 09:12:44.058520] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:42.199 [2024-04-18 09:12:44.058621] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:42.199 [2024-04-18 09:12:44.058678] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:42.199 [2024-04-18 09:12:44.058799] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:42.199 [2024-04-18 09:12:44.058854] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:42.199 [2024-04-18 09:12:44.058951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 
09:12:44.059047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:42.199 [2024-04-18 09:12:44.059096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:43:42.199 [2024-04-18 09:12:44.059166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.088756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.089026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:42.199 [2024-04-18 09:12:44.089122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.462 ms 00:43:42.199 [2024-04-18 09:12:44.089164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.089302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.089340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:42.199 [2024-04-18 09:12:44.089459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:43:42.199 [2024-04-18 09:12:44.089507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.160843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.161101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:42.199 [2024-04-18 09:12:44.161205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.227 ms 00:43:42.199 [2024-04-18 09:12:44.161248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.161344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.161394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:42.199 [2024-04-18 09:12:44.161439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:42.199 [2024-04-18 09:12:44.161527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.162086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.162134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:42.199 [2024-04-18 09:12:44.162254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:43:42.199 [2024-04-18 09:12:44.162333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.162504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.162593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:42.199 [2024-04-18 09:12:44.162642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:43:42.199 [2024-04-18 09:12:44.162676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.189530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.189766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:42.199 [2024-04-18 09:12:44.189862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.683 ms 00:43:42.199 [2024-04-18 09:12:44.189903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.207788] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:43:42.199 [2024-04-18 09:12:44.214584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.214821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:42.199 [2024-04-18 09:12:44.214953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.529 ms 00:43:42.199 [2024-04-18 09:12:44.215003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.290108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:42.199 [2024-04-18 09:12:44.290398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:43:42.199 [2024-04-18 09:12:44.290498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.024 ms 00:43:42.199 [2024-04-18 09:12:44.290546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:42.199 [2024-04-18 09:12:44.290629] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:43:42.199 [2024-04-18 09:12:44.290802] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:43:44.723 [2024-04-18 09:12:46.540598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.540911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:43:44.723 [2024-04-18 09:12:46.541032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2249.951 ms 00:43:44.723 [2024-04-18 09:12:46.541086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.723 [2024-04-18 09:12:46.541460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.541594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:44.723 [2024-04-18 09:12:46.541682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:43:44.723 [2024-04-18 09:12:46.541778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.723 [2024-04-18 09:12:46.589092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.589474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:43:44.723 [2024-04-18 09:12:46.589643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.188 ms 00:43:44.723 [2024-04-18 09:12:46.589799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.723 [2024-04-18 09:12:46.635849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.636156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:43:44.723 [2024-04-18 09:12:46.636292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.893 ms 00:43:44.723 [2024-04-18 09:12:46.636336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.723 [2024-04-18 09:12:46.637096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.637247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:44.723 [2024-04-18 09:12:46.637337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:43:44.723 [2024-04-18 09:12:46.637439] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:43:44.723 [2024-04-18 09:12:46.753649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.723 [2024-04-18 09:12:46.753925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:43:44.723 [2024-04-18 09:12:46.754054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.085 ms 00:43:44.723 [2024-04-18 09:12:46.754102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.724 [2024-04-18 09:12:46.803365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.724 [2024-04-18 09:12:46.803666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:43:44.724 [2024-04-18 09:12:46.803774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.096 ms 00:43:44.724 [2024-04-18 09:12:46.803821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.724 [2024-04-18 09:12:46.806350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.724 [2024-04-18 09:12:46.806524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:43:44.724 [2024-04-18 09:12:46.806612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:43:44.724 [2024-04-18 09:12:46.806661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.981 [2024-04-18 09:12:46.856283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.981 [2024-04-18 09:12:46.856557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:44.981 [2024-04-18 09:12:46.856654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.462 ms 00:43:44.981 [2024-04-18 09:12:46.856701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.982 [2024-04-18 09:12:46.856882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.982 [2024-04-18 09:12:46.856942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:44.982 [2024-04-18 09:12:46.856983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:44.982 [2024-04-18 09:12:46.857027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.982 [2024-04-18 09:12:46.857231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:44.982 [2024-04-18 09:12:46.857283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:44.982 [2024-04-18 09:12:46.857324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:43:44.982 [2024-04-18 09:12:46.857466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.982 [2024-04-18 09:12:46.858705] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2826.787 ms, result 0 00:43:44.982 { 00:43:44.982 "name": "ftl0", 00:43:44.982 "uuid": "d86e0aa5-e3a9-45ca-8fbf-353c5578e30e" 00:43:44.982 } 00:43:44.982 09:12:46 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:43:44.982 09:12:46 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:43:44.982 09:12:46 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:43:45.239 09:12:47 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:43:45.239 [2024-04-18 09:12:47.283087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel created on ftl0 00:43:45.239 I/O size of 69632 is greater than zero copy threshold (65536). 00:43:45.240 Zero copy mechanism will not be used. 00:43:45.240 Running I/O for 4 seconds... 00:43:49.423 00:43:49.423 Latency(us) 00:43:49.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:49.423 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:43:49.423 ftl0 : 4.00 2358.79 156.64 0.00 0.00 444.22 199.92 2059.70 00:43:49.423 =================================================================================================================== 00:43:49.423 Total : 2358.79 156.64 0.00 0.00 444.22 199.92 2059.70 00:43:49.423 [2024-04-18 09:12:51.296228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:43:49.423 0 00:43:49.423 09:12:51 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-04-18 09:12:51.428284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds... 00:43:53.609 00:43:53.609 Latency(us) 00:43:53.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:53.609 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:43:53.609 ftl0 : 4.02 8513.20 33.25 0.00 0.00 14997.53 276.97 31457.28 00:43:53.609 =================================================================================================================== 00:43:53.609 Total : 8513.20 33.25 0.00 0.00 14997.53 0.00 31457.28 00:43:53.609 [2024-04-18 09:12:55.460773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:43:53.609 0 00:43:53.609 09:12:55 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-04-18 09:12:55.582662] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds... 
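For orientation: the bdevperf binary was started earlier with -z -T ftl0, so it sits idle until bdevperf.py perform_tests drives a workload over its RPC socket. The verify results that follow are the last of the three passes issued in this log; restated side by side (a condensed restatement, not a new invocation):

    bp=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $bp perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB IOs at QD 1; above the 64 KiB zero-copy threshold
    $bp perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB random writes at QD 128
    $bp perform_tests -q 128 -w verify    -t 4 -o 4096    # 4 KiB writes plus read-back verification

Each pass is bracketed by the io_channel_create_cb / io_channel_destroy_cb notices as bdevperf opens and closes its IO channel on ftl0; the Average/min/max columns in the result tables are microseconds, per the Latency(us) header.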
00:43:57.814 00:43:57.814 Latency(us) 00:43:57.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:57.814 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:57.814 Verification LBA range: start 0x0 length 0x1400000 00:43:57.814 ftl0 : 4.01 7076.49 27.64 0.00 0.00 18027.41 294.52 27962.03 00:43:57.814 =================================================================================================================== 00:43:57.814 Total : 7076.49 27.64 0.00 0.00 18027.41 0.00 27962.03 00:43:57.814 [2024-04-18 09:12:59.618404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:43:57.814 0 00:43:57.814 09:12:59 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:43:58.073 [2024-04-18 09:12:59.925109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:12:59.925436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:58.073 [2024-04-18 09:12:59.925566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:58.073 [2024-04-18 09:12:59.925730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.073 [2024-04-18 09:12:59.925813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:58.073 [2024-04-18 09:12:59.930152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:12:59.930339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:58.073 [2024-04-18 09:12:59.930500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:43:58.073 [2024-04-18 09:12:59.930548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.073 [2024-04-18 09:12:59.932197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:12:59.932397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:58.073 [2024-04-18 09:12:59.932525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:43:58.073 [2024-04-18 09:12:59.932643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.073 [2024-04-18 09:13:00.112602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:13:00.112860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:58.073 [2024-04-18 09:13:00.113001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.875 ms 00:43:58.073 [2024-04-18 09:13:00.113123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.073 [2024-04-18 09:13:00.119623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:13:00.119826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:43:58.073 [2024-04-18 09:13:00.119958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.398 ms 00:43:58.073 [2024-04-18 09:13:00.120009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.073 [2024-04-18 09:13:00.168069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.073 [2024-04-18 09:13:00.168307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:58.073 [2024-04-18 09:13:00.168496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
47.910 ms 00:43:58.073 [2024-04-18 09:13:00.168548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.332 [2024-04-18 09:13:00.196394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.196619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:58.333 [2024-04-18 09:13:00.196749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.646 ms 00:43:58.333 [2024-04-18 09:13:00.196858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.197133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.197198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:58.333 [2024-04-18 09:13:00.197310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:43:58.333 [2024-04-18 09:13:00.197362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.244775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.245043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:43:58.333 [2024-04-18 09:13:00.245171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.333 ms 00:43:58.333 [2024-04-18 09:13:00.245341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.293002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.293300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:43:58.333 [2024-04-18 09:13:00.293462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.463 ms 00:43:58.333 [2024-04-18 09:13:00.293511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.338281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.338594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:58.333 [2024-04-18 09:13:00.338751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.599 ms 00:43:58.333 [2024-04-18 09:13:00.338814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.384929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.333 [2024-04-18 09:13:00.385232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:58.333 [2024-04-18 09:13:00.385364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.835 ms 00:43:58.333 [2024-04-18 09:13:00.385432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.333 [2024-04-18 09:13:00.385594] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:58.333 [2024-04-18 09:13:00.385656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.385909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.385968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 
09:13:00.386184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.386891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.387988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.388940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:43:58.333 [2024-04-18 09:13:00.389309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.389934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.390904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.391948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.392998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.393962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:58.333 [2024-04-18 09:13:00.394723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.394859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.394984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.395904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.396860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.397974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.398988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.399070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:58.334 [2024-04-18 09:13:00.399221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:58.334 [2024-04-18 09:13:00.399341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d86e0aa5-e3a9-45ca-8fbf-353c5578e30e 00:43:58.334 [2024-04-18 09:13:00.399533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:58.334 [2024-04-18 09:13:00.399648] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:58.334 
[2024-04-18 09:13:00.399699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:58.334 [2024-04-18 09:13:00.399762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:58.334 [2024-04-18 09:13:00.399856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:58.334 [2024-04-18 09:13:00.400018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:58.334 [2024-04-18 09:13:00.400129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:58.334 [2024-04-18 09:13:00.400183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:58.334 [2024-04-18 09:13:00.400259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:58.334 [2024-04-18 09:13:00.400382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.334 [2024-04-18 09:13:00.400496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:58.334 [2024-04-18 09:13:00.400610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.777 ms 00:43:58.334 [2024-04-18 09:13:00.400755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.334 [2024-04-18 09:13:00.421174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.334 [2024-04-18 09:13:00.421438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:58.334 [2024-04-18 09:13:00.421552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.212 ms 00:43:58.334 [2024-04-18 09:13:00.421601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.334 [2024-04-18 09:13:00.422080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:58.334 [2024-04-18 09:13:00.422210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:58.334 [2024-04-18 09:13:00.422328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:43:58.334 [2024-04-18 09:13:00.422455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.487334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.487634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:58.593 [2024-04-18 09:13:00.487776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.487825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.487942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.488038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:58.593 [2024-04-18 09:13:00.488098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.488139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.488283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.488356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:58.593 [2024-04-18 09:13:00.488437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.488481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.488535] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.488572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:58.593 [2024-04-18 09:13:00.488612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.488693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.622357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.622657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:58.593 [2024-04-18 09:13:00.622842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.622895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.679047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.679307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:58.593 [2024-04-18 09:13:00.679456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.679504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.679633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.679716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:58.593 [2024-04-18 09:13:00.679794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.679832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.679939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.679984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:58.593 [2024-04-18 09:13:00.680026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.680145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.680340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.680417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:58.593 [2024-04-18 09:13:00.680519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.680583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.680675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.680722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:58.593 [2024-04-18 09:13:00.680760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.680793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:58.593 [2024-04-18 09:13:00.680968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:58.593 [2024-04-18 09:13:00.681031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:58.593 [2024-04-18 09:13:00.681068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:58.593 [2024-04-18 09:13:00.681170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
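The shutdown trace above is highly regular: every management step is reported as an Action (or Rollback) line, a name line, a duration line, and a status line, and finish_msg then reports the overall wall time ('FTL shutdown', 756.591 ms just below). That regularity makes it easy to pull a per-step timing table out of a captured console log. A minimal sketch, assuming GNU awk and a saved log named ftl.log (both the tool choice and the file name are illustrative, not part of the test):

  awk '
    /trace_step:.*name:/     { sub(/.*name: /, ""); step = $0 }     # remember the step name
    /trace_step:.*duration:/ { ms = $(NF - 1); total += ms          # "duration: X ms" -> X
                               printf "%-45s %10.3f ms\n", step, ms }
    END                      { printf "%-45s %10.3f ms\n", "TOTAL", total }
  ' ftl.log

Because it keys only on the trace_step markers, the same command tolerates the leading Jenkins timestamps in the raw console output; note that TOTAL sums the individual steps, which can differ slightly from the duration finish_msg reports for the whole management process.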
00:43:58.593 [2024-04-18 09:13:00.681254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:43:58.593 [2024-04-18 09:13:00.681367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:43:58.593 [2024-04-18 09:13:00.681436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:43:58.593 [2024-04-18 09:13:00.681536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:43:58.593 [2024-04-18 09:13:00.681742] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 756.591 ms, result 0
00:43:58.593 true
00:43:58.852 09:13:00 -- ftl/bdevperf.sh@37 -- # killprocess 78478
00:43:58.852 09:13:00 -- common/autotest_common.sh@936 -- # '[' -z 78478 ']'
00:43:58.852 09:13:00 -- common/autotest_common.sh@940 -- # kill -0 78478
00:43:58.852 09:13:00 -- common/autotest_common.sh@941 -- # uname
00:43:58.852 09:13:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:43:58.852 09:13:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78478
killing process with pid 78478
Received shutdown signal, test time was about 4.000000 seconds
00:43:58.852 
00:43:58.852 Latency(us)
00:43:58.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:58.852 ===================================================================================================================
00:43:58.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:58.852 09:13:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:43:58.852 09:13:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:43:58.852 09:13:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78478'
00:43:58.852 09:13:00 -- common/autotest_common.sh@955 -- # kill 78478
00:43:58.852 09:13:00 -- common/autotest_common.sh@960 -- # wait 78478
00:44:03.034 09:13:04 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:44:03.034 09:13:04 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:44:03.034 09:13:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:44:03.034 09:13:04 -- common/autotest_common.sh@10 -- # set +x
00:44:03.034 09:13:04 -- ftl/bdevperf.sh@41 -- # remove_shm
00:44:03.034 09:13:04 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:44:03.034 Remove shared memory files
00:44:03.034 09:13:04 -- ftl/common.sh@205 -- # rm -f rm -f
00:44:03.034 09:13:04 -- ftl/common.sh@206 -- # rm -f rm -f
00:44:03.034 09:13:04 -- ftl/common.sh@207 -- # rm -f rm -f
00:44:03.034 09:13:04 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:44:03.034 09:13:04 -- ftl/common.sh@209 -- # rm -f rm -f
00:44:03.034 ************************************
00:44:03.034 END TEST ftl_bdevperf
00:44:03.034 ************************************
00:44:03.034 
00:44:03.034 real 0m25.750s
00:44:03.034 user 0m28.881s
00:44:03.034 sys 0m1.380s
00:44:03.034 09:13:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:44:03.034 09:13:04 -- common/autotest_common.sh@10 -- # set +x
00:44:03.034 09:13:05 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:44:03.034 09:13:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:44:03.034 09:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:44:03.034 09:13:05 -- common/autotest_common.sh@10 -- # set +x
00:44:03.034 ************************************
00:44:03.034 START TEST ftl_trim 00:44:03.034 ************************************ 00:44:03.034 09:13:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:44:03.292 * Looking for test storage... 00:44:03.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:03.292 09:13:05 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:03.293 09:13:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:44:03.293 09:13:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:03.293 09:13:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:03.293 09:13:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:44:03.293 09:13:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:03.293 09:13:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:03.293 09:13:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:03.293 09:13:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:03.293 09:13:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:03.293 09:13:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:03.293 09:13:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:03.293 09:13:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:03.293 09:13:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:03.293 09:13:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:03.293 09:13:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:03.293 09:13:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:03.293 09:13:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:03.293 09:13:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:03.293 09:13:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:03.293 09:13:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:03.293 09:13:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:03.293 09:13:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:03.293 09:13:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:03.293 09:13:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:03.293 09:13:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:03.293 09:13:05 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:03.293 09:13:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:03.293 09:13:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:03.293 09:13:05 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:03.293 09:13:05 -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:44:03.293 09:13:05 -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:44:03.293 09:13:05 -- ftl/trim.sh@25 -- # timeout=240 00:44:03.293 09:13:05 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:44:03.293 09:13:05 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:44:03.293 09:13:05 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:44:03.293 09:13:05 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:44:03.293 09:13:05 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:44:03.293 09:13:05 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:03.293 09:13:05 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:03.293 09:13:05 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:44:03.293 09:13:05 -- ftl/trim.sh@40 -- # svcpid=78849 00:44:03.293 09:13:05 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:44:03.293 09:13:05 -- ftl/trim.sh@41 -- # waitforlisten 78849 00:44:03.293 09:13:05 -- common/autotest_common.sh@817 -- # '[' -z 78849 ']' 00:44:03.293 09:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:03.293 09:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:03.293 09:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:03.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:03.293 09:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:03.293 09:13:05 -- common/autotest_common.sh@10 -- # set +x 00:44:03.293 [2024-04-18 09:13:05.363058] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:44:03.293 [2024-04-18 09:13:05.363450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78849 ] 00:44:03.551 [2024-04-18 09:13:05.547569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:03.809 [2024-04-18 09:13:05.903070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:03.809 [2024-04-18 09:13:05.903227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.809 [2024-04-18 09:13:05.903257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:05.182 09:13:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:05.182 09:13:07 -- common/autotest_common.sh@850 -- # return 0 00:44:05.182 09:13:07 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:05.182 09:13:07 -- ftl/common.sh@54 -- # local name=nvme0 00:44:05.182 09:13:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:05.182 09:13:07 -- ftl/common.sh@56 -- # local size=103424 00:44:05.182 09:13:07 -- ftl/common.sh@59 -- # local base_bdev 00:44:05.182 09:13:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:05.441 09:13:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:05.441 09:13:07 -- ftl/common.sh@62 -- # local base_size 00:44:05.441 09:13:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:05.441 09:13:07 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:44:05.441 09:13:07 -- common/autotest_common.sh@1365 -- # local bdev_info 00:44:05.441 09:13:07 -- common/autotest_common.sh@1366 -- # local bs 00:44:05.441 09:13:07 -- common/autotest_common.sh@1367 -- # local nb 00:44:05.441 09:13:07 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:05.699 09:13:07 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:44:05.699 { 00:44:05.699 "name": "nvme0n1", 00:44:05.699 "aliases": [ 00:44:05.699 
"5e4e166c-da82-46c7-9c4d-54ea2f2a32e8" 00:44:05.699 ], 00:44:05.699 "product_name": "NVMe disk", 00:44:05.699 "block_size": 4096, 00:44:05.699 "num_blocks": 1310720, 00:44:05.699 "uuid": "5e4e166c-da82-46c7-9c4d-54ea2f2a32e8", 00:44:05.699 "assigned_rate_limits": { 00:44:05.699 "rw_ios_per_sec": 0, 00:44:05.699 "rw_mbytes_per_sec": 0, 00:44:05.699 "r_mbytes_per_sec": 0, 00:44:05.699 "w_mbytes_per_sec": 0 00:44:05.699 }, 00:44:05.699 "claimed": true, 00:44:05.699 "claim_type": "read_many_write_one", 00:44:05.699 "zoned": false, 00:44:05.699 "supported_io_types": { 00:44:05.699 "read": true, 00:44:05.699 "write": true, 00:44:05.699 "unmap": true, 00:44:05.699 "write_zeroes": true, 00:44:05.699 "flush": true, 00:44:05.699 "reset": true, 00:44:05.699 "compare": true, 00:44:05.699 "compare_and_write": false, 00:44:05.699 "abort": true, 00:44:05.699 "nvme_admin": true, 00:44:05.699 "nvme_io": true 00:44:05.699 }, 00:44:05.699 "driver_specific": { 00:44:05.699 "nvme": [ 00:44:05.699 { 00:44:05.699 "pci_address": "0000:00:11.0", 00:44:05.699 "trid": { 00:44:05.699 "trtype": "PCIe", 00:44:05.699 "traddr": "0000:00:11.0" 00:44:05.699 }, 00:44:05.699 "ctrlr_data": { 00:44:05.699 "cntlid": 0, 00:44:05.699 "vendor_id": "0x1b36", 00:44:05.699 "model_number": "QEMU NVMe Ctrl", 00:44:05.699 "serial_number": "12341", 00:44:05.699 "firmware_revision": "8.0.0", 00:44:05.699 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:05.699 "oacs": { 00:44:05.699 "security": 0, 00:44:05.699 "format": 1, 00:44:05.699 "firmware": 0, 00:44:05.699 "ns_manage": 1 00:44:05.699 }, 00:44:05.699 "multi_ctrlr": false, 00:44:05.699 "ana_reporting": false 00:44:05.699 }, 00:44:05.699 "vs": { 00:44:05.699 "nvme_version": "1.4" 00:44:05.699 }, 00:44:05.699 "ns_data": { 00:44:05.699 "id": 1, 00:44:05.699 "can_share": false 00:44:05.699 } 00:44:05.699 } 00:44:05.699 ], 00:44:05.699 "mp_policy": "active_passive" 00:44:05.699 } 00:44:05.699 } 00:44:05.699 ]' 00:44:05.699 09:13:07 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:44:05.699 09:13:07 -- common/autotest_common.sh@1369 -- # bs=4096 00:44:05.699 09:13:07 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:44:05.699 09:13:07 -- common/autotest_common.sh@1370 -- # nb=1310720 00:44:05.699 09:13:07 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:44:05.699 09:13:07 -- common/autotest_common.sh@1374 -- # echo 5120 00:44:05.699 09:13:07 -- ftl/common.sh@63 -- # base_size=5120 00:44:05.699 09:13:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:05.699 09:13:07 -- ftl/common.sh@67 -- # clear_lvols 00:44:05.699 09:13:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:05.699 09:13:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:05.957 09:13:07 -- ftl/common.sh@28 -- # stores=57c49b0b-9de2-4486-b37d-4fd13b077424 00:44:05.957 09:13:07 -- ftl/common.sh@29 -- # for lvs in $stores 00:44:05.957 09:13:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57c49b0b-9de2-4486-b37d-4fd13b077424 00:44:06.215 09:13:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:06.473 09:13:08 -- ftl/common.sh@68 -- # lvs=4ab457df-a210-4b27-81eb-0cb3fc6bd50c 00:44:06.473 09:13:08 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4ab457df-a210-4b27-81eb-0cb3fc6bd50c 00:44:06.731 09:13:08 -- ftl/trim.sh@43 -- # split_bdev=1e9fb707-9173-4004-9d5e-6bd4145ad629 
00:44:06.731 09:13:08 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:06.731 09:13:08 -- ftl/common.sh@35 -- # local name=nvc0 00:44:06.731 09:13:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:06.731 09:13:08 -- ftl/common.sh@37 -- # local base_bdev=1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:06.731 09:13:08 -- ftl/common.sh@38 -- # local cache_size= 00:44:06.731 09:13:08 -- ftl/common.sh@41 -- # get_bdev_size 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:06.731 09:13:08 -- common/autotest_common.sh@1364 -- # local bdev_name=1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:06.731 09:13:08 -- common/autotest_common.sh@1365 -- # local bdev_info 00:44:06.732 09:13:08 -- common/autotest_common.sh@1366 -- # local bs 00:44:06.732 09:13:08 -- common/autotest_common.sh@1367 -- # local nb 00:44:06.732 09:13:08 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:06.990 09:13:08 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:44:06.990 { 00:44:06.990 "name": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:06.990 "aliases": [ 00:44:06.990 "lvs/nvme0n1p0" 00:44:06.990 ], 00:44:06.990 "product_name": "Logical Volume", 00:44:06.990 "block_size": 4096, 00:44:06.990 "num_blocks": 26476544, 00:44:06.990 "uuid": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:06.990 "assigned_rate_limits": { 00:44:06.990 "rw_ios_per_sec": 0, 00:44:06.990 "rw_mbytes_per_sec": 0, 00:44:06.991 "r_mbytes_per_sec": 0, 00:44:06.991 "w_mbytes_per_sec": 0 00:44:06.991 }, 00:44:06.991 "claimed": false, 00:44:06.991 "zoned": false, 00:44:06.991 "supported_io_types": { 00:44:06.991 "read": true, 00:44:06.991 "write": true, 00:44:06.991 "unmap": true, 00:44:06.991 "write_zeroes": true, 00:44:06.991 "flush": false, 00:44:06.991 "reset": true, 00:44:06.991 "compare": false, 00:44:06.991 "compare_and_write": false, 00:44:06.991 "abort": false, 00:44:06.991 "nvme_admin": false, 00:44:06.991 "nvme_io": false 00:44:06.991 }, 00:44:06.991 "driver_specific": { 00:44:06.991 "lvol": { 00:44:06.991 "lvol_store_uuid": "4ab457df-a210-4b27-81eb-0cb3fc6bd50c", 00:44:06.991 "base_bdev": "nvme0n1", 00:44:06.991 "thin_provision": true, 00:44:06.991 "snapshot": false, 00:44:06.991 "clone": false, 00:44:06.991 "esnap_clone": false 00:44:06.991 } 00:44:06.991 } 00:44:06.991 } 00:44:06.991 ]' 00:44:06.991 09:13:08 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:44:06.991 09:13:08 -- common/autotest_common.sh@1369 -- # bs=4096 00:44:06.991 09:13:08 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:44:06.991 09:13:08 -- common/autotest_common.sh@1370 -- # nb=26476544 00:44:06.991 09:13:08 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:44:06.991 09:13:08 -- common/autotest_common.sh@1374 -- # echo 103424 00:44:06.991 09:13:08 -- ftl/common.sh@41 -- # local base_size=5171 00:44:06.991 09:13:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:44:06.991 09:13:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:07.250 09:13:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:07.250 09:13:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:07.250 09:13:09 -- ftl/common.sh@48 -- # get_bdev_size 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:07.250 09:13:09 -- common/autotest_common.sh@1364 -- # local bdev_name=1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:07.250 09:13:09 -- common/autotest_common.sh@1365 -- # 
local bdev_info 00:44:07.250 09:13:09 -- common/autotest_common.sh@1366 -- # local bs 00:44:07.250 09:13:09 -- common/autotest_common.sh@1367 -- # local nb 00:44:07.250 09:13:09 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:07.816 09:13:09 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:44:07.816 { 00:44:07.816 "name": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:07.816 "aliases": [ 00:44:07.816 "lvs/nvme0n1p0" 00:44:07.816 ], 00:44:07.816 "product_name": "Logical Volume", 00:44:07.816 "block_size": 4096, 00:44:07.816 "num_blocks": 26476544, 00:44:07.816 "uuid": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:07.816 "assigned_rate_limits": { 00:44:07.816 "rw_ios_per_sec": 0, 00:44:07.816 "rw_mbytes_per_sec": 0, 00:44:07.816 "r_mbytes_per_sec": 0, 00:44:07.816 "w_mbytes_per_sec": 0 00:44:07.816 }, 00:44:07.816 "claimed": false, 00:44:07.816 "zoned": false, 00:44:07.816 "supported_io_types": { 00:44:07.816 "read": true, 00:44:07.816 "write": true, 00:44:07.816 "unmap": true, 00:44:07.816 "write_zeroes": true, 00:44:07.816 "flush": false, 00:44:07.816 "reset": true, 00:44:07.816 "compare": false, 00:44:07.816 "compare_and_write": false, 00:44:07.816 "abort": false, 00:44:07.816 "nvme_admin": false, 00:44:07.816 "nvme_io": false 00:44:07.816 }, 00:44:07.816 "driver_specific": { 00:44:07.816 "lvol": { 00:44:07.816 "lvol_store_uuid": "4ab457df-a210-4b27-81eb-0cb3fc6bd50c", 00:44:07.816 "base_bdev": "nvme0n1", 00:44:07.816 "thin_provision": true, 00:44:07.816 "snapshot": false, 00:44:07.816 "clone": false, 00:44:07.816 "esnap_clone": false 00:44:07.816 } 00:44:07.816 } 00:44:07.816 } 00:44:07.816 ]' 00:44:07.816 09:13:09 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:44:07.816 09:13:09 -- common/autotest_common.sh@1369 -- # bs=4096 00:44:07.816 09:13:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:44:07.816 09:13:09 -- common/autotest_common.sh@1370 -- # nb=26476544 00:44:07.816 09:13:09 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:44:07.816 09:13:09 -- common/autotest_common.sh@1374 -- # echo 103424 00:44:07.816 09:13:09 -- ftl/common.sh@48 -- # cache_size=5171 00:44:07.816 09:13:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:08.074 09:13:09 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:44:08.074 09:13:09 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:44:08.074 09:13:09 -- ftl/trim.sh@47 -- # get_bdev_size 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:08.074 09:13:09 -- common/autotest_common.sh@1364 -- # local bdev_name=1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:08.074 09:13:09 -- common/autotest_common.sh@1365 -- # local bdev_info 00:44:08.074 09:13:09 -- common/autotest_common.sh@1366 -- # local bs 00:44:08.074 09:13:09 -- common/autotest_common.sh@1367 -- # local nb 00:44:08.074 09:13:09 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e9fb707-9173-4004-9d5e-6bd4145ad629 00:44:08.331 09:13:10 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:44:08.331 { 00:44:08.331 "name": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:08.331 "aliases": [ 00:44:08.331 "lvs/nvme0n1p0" 00:44:08.331 ], 00:44:08.331 "product_name": "Logical Volume", 00:44:08.331 "block_size": 4096, 00:44:08.331 "num_blocks": 26476544, 00:44:08.331 "uuid": "1e9fb707-9173-4004-9d5e-6bd4145ad629", 00:44:08.331 "assigned_rate_limits": { 00:44:08.331 "rw_ios_per_sec": 0, 
00:44:08.331 "rw_mbytes_per_sec": 0, 00:44:08.331 "r_mbytes_per_sec": 0, 00:44:08.331 "w_mbytes_per_sec": 0 00:44:08.331 }, 00:44:08.331 "claimed": false, 00:44:08.331 "zoned": false, 00:44:08.331 "supported_io_types": { 00:44:08.331 "read": true, 00:44:08.332 "write": true, 00:44:08.332 "unmap": true, 00:44:08.332 "write_zeroes": true, 00:44:08.332 "flush": false, 00:44:08.332 "reset": true, 00:44:08.332 "compare": false, 00:44:08.332 "compare_and_write": false, 00:44:08.332 "abort": false, 00:44:08.332 "nvme_admin": false, 00:44:08.332 "nvme_io": false 00:44:08.332 }, 00:44:08.332 "driver_specific": { 00:44:08.332 "lvol": { 00:44:08.332 "lvol_store_uuid": "4ab457df-a210-4b27-81eb-0cb3fc6bd50c", 00:44:08.332 "base_bdev": "nvme0n1", 00:44:08.332 "thin_provision": true, 00:44:08.332 "snapshot": false, 00:44:08.332 "clone": false, 00:44:08.332 "esnap_clone": false 00:44:08.332 } 00:44:08.332 } 00:44:08.332 } 00:44:08.332 ]' 00:44:08.332 09:13:10 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:44:08.332 09:13:10 -- common/autotest_common.sh@1369 -- # bs=4096 00:44:08.332 09:13:10 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:44:08.332 09:13:10 -- common/autotest_common.sh@1370 -- # nb=26476544 00:44:08.332 09:13:10 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:44:08.332 09:13:10 -- common/autotest_common.sh@1374 -- # echo 103424 00:44:08.332 09:13:10 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:44:08.332 09:13:10 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1e9fb707-9173-4004-9d5e-6bd4145ad629 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:44:08.590 [2024-04-18 09:13:10.636398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.636676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:08.590 [2024-04-18 09:13:10.636789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:08.590 [2024-04-18 09:13:10.636886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.640690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.640867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:08.590 [2024-04-18 09:13:10.640967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.708 ms 00:44:08.590 [2024-04-18 09:13:10.641010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.641264] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:08.590 [2024-04-18 09:13:10.642641] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:08.590 [2024-04-18 09:13:10.642820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.642952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:08.590 [2024-04-18 09:13:10.643000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:44:08.590 [2024-04-18 09:13:10.643081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.643262] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:08.590 [2024-04-18 09:13:10.644941] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.645077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:08.590 [2024-04-18 09:13:10.645170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:44:08.590 [2024-04-18 09:13:10.645218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.653166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.653398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:08.590 [2024-04-18 09:13:10.653515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.755 ms 00:44:08.590 [2024-04-18 09:13:10.653562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.653819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.653942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:08.590 [2024-04-18 09:13:10.654034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:44:08.590 [2024-04-18 09:13:10.654087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.654211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.654257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:08.590 [2024-04-18 09:13:10.654385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:44:08.590 [2024-04-18 09:13:10.654435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.590 [2024-04-18 09:13:10.654559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:08.590 [2024-04-18 09:13:10.661445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.590 [2024-04-18 09:13:10.661581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:08.590 [2024-04-18 09:13:10.661671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:44:08.591 [2024-04-18 09:13:10.661712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.591 [2024-04-18 09:13:10.661854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.591 [2024-04-18 09:13:10.661941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:08.591 [2024-04-18 09:13:10.662021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:08.591 [2024-04-18 09:13:10.662144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.591 [2024-04-18 09:13:10.662238] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:08.591 [2024-04-18 09:13:10.662444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:08.591 [2024-04-18 09:13:10.662569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:08.591 [2024-04-18 09:13:10.662729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:44:08.591 [2024-04-18 09:13:10.662909] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
00:44:08.591 [2024-04-18 09:13:10.663052] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:08.591 [2024-04-18 09:13:10.663224] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:08.591 [2024-04-18 09:13:10.663304] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:08.591 [2024-04-18 09:13:10.663355] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:08.591 [2024-04-18 09:13:10.663451] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:08.591 [2024-04-18 09:13:10.663500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.591 [2024-04-18 09:13:10.663581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:08.591 [2024-04-18 09:13:10.663626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:44:08.591 [2024-04-18 09:13:10.663702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.591 [2024-04-18 09:13:10.663848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.591 [2024-04-18 09:13:10.663913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:08.591 [2024-04-18 09:13:10.664011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:44:08.591 [2024-04-18 09:13:10.664052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.591 [2024-04-18 09:13:10.664232] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:08.591 [2024-04-18 09:13:10.664323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:08.591 [2024-04-18 09:13:10.664382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:08.591 [2024-04-18 09:13:10.664468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.591 [2024-04-18 09:13:10.664515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:08.591 [2024-04-18 09:13:10.664550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:08.591 [2024-04-18 09:13:10.664615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:08.591 [2024-04-18 09:13:10.664650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:08.591 [2024-04-18 09:13:10.664686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:08.591 [2024-04-18 09:13:10.664737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:08.591 [2024-04-18 09:13:10.664821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:08.591 [2024-04-18 09:13:10.664861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:08.591 [2024-04-18 09:13:10.664900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:08.591 [2024-04-18 09:13:10.664956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:08.591 [2024-04-18 09:13:10.664993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:08.591 [2024-04-18 09:13:10.665062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.591 [2024-04-18 09:13:10.665150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:08.591 [2024-04-18 09:13:10.665191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:08.591 
[2024-04-18 09:13:10.665300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.591 [2024-04-18 09:13:10.665339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:08.591 [2024-04-18 09:13:10.665390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:08.591 [2024-04-18 09:13:10.665429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:08.591 [2024-04-18 09:13:10.665517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:08.591 [2024-04-18 09:13:10.665556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:08.591 [2024-04-18 09:13:10.665662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:08.591 [2024-04-18 09:13:10.665701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:08.591 [2024-04-18 09:13:10.665738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:08.591 [2024-04-18 09:13:10.665822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:08.591 [2024-04-18 09:13:10.665865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:08.591 [2024-04-18 09:13:10.665899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:08.591 [2024-04-18 09:13:10.665951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:08.591 [2024-04-18 09:13:10.665985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:08.591 [2024-04-18 09:13:10.666022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:08.591 [2024-04-18 09:13:10.666055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:08.591 [2024-04-18 09:13:10.666118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:08.591 [2024-04-18 09:13:10.666152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:08.591 [2024-04-18 09:13:10.666191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:08.591 [2024-04-18 09:13:10.666225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:08.591 [2024-04-18 09:13:10.666271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:08.591 [2024-04-18 09:13:10.666305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:08.591 [2024-04-18 09:13:10.666341] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:08.591 [2024-04-18 09:13:10.666386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:08.591 [2024-04-18 09:13:10.666452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:08.591 [2024-04-18 09:13:10.666487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.591 [2024-04-18 09:13:10.666524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:08.591 [2024-04-18 09:13:10.666559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:08.591 [2024-04-18 09:13:10.666595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:08.591 [2024-04-18 09:13:10.666676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:08.591 [2024-04-18 09:13:10.666719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:08.591 [2024-04-18 09:13:10.666753] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:44:08.591 [2024-04-18 09:13:10.666812] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:08.591 [2024-04-18 09:13:10.666869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:08.591 [2024-04-18 09:13:10.666930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:08.591 [2024-04-18 09:13:10.667031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:08.591 [2024-04-18 09:13:10.667090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:08.591 [2024-04-18 09:13:10.667169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:08.591 [2024-04-18 09:13:10.667231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:08.591 [2024-04-18 09:13:10.667286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:08.591 [2024-04-18 09:13:10.667379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:08.591 [2024-04-18 09:13:10.667461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:08.591 [2024-04-18 09:13:10.667576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:08.591 [2024-04-18 09:13:10.667681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:44:08.591 [2024-04-18 09:13:10.667745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:44:08.591 [2024-04-18 09:13:10.667845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:08.591 [2024-04-18 09:13:10.667918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:08.591 [2024-04-18 09:13:10.667994] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:08.591 [2024-04-18 09:13:10.668095] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:08.591 [2024-04-18 09:13:10.668259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:08.591 [2024-04-18 09:13:10.668357] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:08.591 [2024-04-18 09:13:10.668451] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:08.591 [2024-04-18 09:13:10.668579] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:08.591 [2024-04-18 09:13:10.668683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.591 [2024-04-18 09:13:10.668808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:08.591 [2024-04-18 09:13:10.668851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:44:08.591 [2024-04-18 09:13:10.668943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.850 [2024-04-18 09:13:10.696918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.850 [2024-04-18 09:13:10.697138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:08.850 [2024-04-18 09:13:10.697278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.815 ms 00:44:08.850 [2024-04-18 09:13:10.697425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.850 [2024-04-18 09:13:10.697690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.697792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:08.851 [2024-04-18 09:13:10.697874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:08.851 [2024-04-18 09:13:10.697980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.760749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.761033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:08.851 [2024-04-18 09:13:10.761130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.688 ms 00:44:08.851 [2024-04-18 09:13:10.761177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.761390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.761506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:08.851 [2024-04-18 09:13:10.761617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:44:08.851 [2024-04-18 09:13:10.761667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.762240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.762363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:08.851 [2024-04-18 09:13:10.762468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:44:08.851 [2024-04-18 09:13:10.762514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.762708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.762811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:08.851 [2024-04-18 09:13:10.762893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:44:08.851 [2024-04-18 09:13:10.762943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.800643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.800832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:08.851 [2024-04-18 09:13:10.800939] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.589 ms 00:44:08.851 [2024-04-18 09:13:10.800986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.818417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:08.851 [2024-04-18 09:13:10.837100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.837349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:08.851 [2024-04-18 09:13:10.837537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.821 ms 00:44:08.851 [2024-04-18 09:13:10.837653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.917879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.851 [2024-04-18 09:13:10.918124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:08.851 [2024-04-18 09:13:10.918242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.062 ms 00:44:08.851 [2024-04-18 09:13:10.918290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.851 [2024-04-18 09:13:10.918450] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:44:08.851 [2024-04-18 09:13:10.918634] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:44:11.380 [2024-04-18 09:13:13.262065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:11.380 [2024-04-18 09:13:13.262317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:11.380 [2024-04-18 09:13:13.262453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2343.590 ms 00:44:11.380 [2024-04-18 09:13:13.262556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.380 [2024-04-18 09:13:13.262926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:11.380 [2024-04-18 09:13:13.263050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:11.380 [2024-04-18 09:13:13.263153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:44:11.380 [2024-04-18 09:13:13.263202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.380 [2024-04-18 09:13:13.310816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:11.380 [2024-04-18 09:13:13.311030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:11.380 [2024-04-18 09:13:13.311148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.445 ms 00:44:11.380 [2024-04-18 09:13:13.311264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.380 [2024-04-18 09:13:13.358812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:11.380 [2024-04-18 09:13:13.358948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:11.380 [2024-04-18 09:13:13.359043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.358 ms 00:44:11.380 [2024-04-18 09:13:13.359081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:11.380 [2024-04-18 09:13:13.359734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:11.380 [2024-04-18 09:13:13.359900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing
00:44:11.380 [2024-04-18 09:13:13.360016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms
00:44:11.380 [2024-04-18 09:13:13.360059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.380 [2024-04-18 09:13:13.475773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.380 [2024-04-18 09:13:13.476040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:44:11.380 [2024-04-18 09:13:13.476167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.576 ms
00:44:11.380 [2024-04-18 09:13:13.476211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.524558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.646 [2024-04-18 09:13:13.524808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:44:11.646 [2024-04-18 09:13:13.524931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.151 ms
00:44:11.646 [2024-04-18 09:13:13.524976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.530559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.646 [2024-04-18 09:13:13.530731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:44:11.646 [2024-04-18 09:13:13.530824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms
00:44:11.646 [2024-04-18 09:13:13.530866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.577966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.646 [2024-04-18 09:13:13.578262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:44:11.646 [2024-04-18 09:13:13.578369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.989 ms
00:44:11.646 [2024-04-18 09:13:13.578436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.578626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.646 [2024-04-18 09:13:13.578678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:44:11.646 [2024-04-18 09:13:13.578721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:44:11.646 [2024-04-18 09:13:13.578806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.578955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:11.646 [2024-04-18 09:13:13.578999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:44:11.646 [2024-04-18 09:13:13.579083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:44:11.646 [2024-04-18 09:13:13.579124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:11.646 [2024-04-18 09:13:13.580324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:44:11.646 [2024-04-18 09:13:13.587627] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2943.577 ms, result 0
00:44:11.646 [2024-04-18 09:13:13.588792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:44:11.646 {
00:44:11.646 "name": "ftl0",
00:44:11.646 "uuid": "4d4e12f0-6a8e-4451-b5ee-b80718c55513"
00:44:11.646 }
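The block above is the device's first bring-up completing: the 'FTL startup' management process finishes with result 0, and the create RPC replies with the new bdev's name and UUID. For reference, a minimal sketch of the same step done by hand, assuming the bdev_ftl_create and bdev_get_bdevs RPCs as shipped in the SPDK tree under test (the cache bdev name nvc0n1p0 is the one in this log; <base_bdev> stands in for the job's NVMe-derived base device, and the relative script path is illustrative):
  # create an FTL bdev backed by a base bdev plus an NV cache partition
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0
  # confirm registration and inspect the resulting geometry
  ./scripts/rpc.py bdev_get_bdevs -b ftl0
The waitforbdev helper traced below performs essentially that second call, with a 2000 ms timeout.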
00:44:11.646 09:13:13 -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:44:11.646 09:13:13 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0
00:44:11.646 09:13:13 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:44:11.646 09:13:13 -- common/autotest_common.sh@887 -- # local i
00:44:11.646 09:13:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:44:11.646 09:13:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:44:11.646 09:13:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:44:11.904 09:13:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 [
00:44:12.163 {
00:44:12.163 "name": "ftl0",
00:44:12.163 "aliases": [
00:44:12.163 "4d4e12f0-6a8e-4451-b5ee-b80718c55513"
00:44:12.163 ],
00:44:12.163 "product_name": "FTL disk",
00:44:12.163 "block_size": 4096,
00:44:12.163 "num_blocks": 23592960,
00:44:12.163 "uuid": "4d4e12f0-6a8e-4451-b5ee-b80718c55513",
00:44:12.163 "assigned_rate_limits": {
00:44:12.163 "rw_ios_per_sec": 0,
00:44:12.163 "rw_mbytes_per_sec": 0,
00:44:12.163 "r_mbytes_per_sec": 0,
00:44:12.163 "w_mbytes_per_sec": 0
00:44:12.163 },
00:44:12.163 "claimed": false,
00:44:12.163 "zoned": false,
00:44:12.163 "supported_io_types": {
00:44:12.163 "read": true,
00:44:12.163 "write": true,
00:44:12.163 "unmap": true,
00:44:12.163 "write_zeroes": true,
00:44:12.163 "flush": true,
00:44:12.163 "reset": false,
00:44:12.163 "compare": false,
00:44:12.163 "compare_and_write": false,
00:44:12.163 "abort": false,
00:44:12.163 "nvme_admin": false,
00:44:12.163 "nvme_io": false
00:44:12.163 },
00:44:12.163 "driver_specific": {
00:44:12.163 "ftl": {
00:44:12.163 "base_bdev": "1e9fb707-9173-4004-9d5e-6bd4145ad629",
00:44:12.163 "cache": "nvc0n1p0"
00:44:12.163 }
00:44:12.163 }
00:44:12.163 }
00:44:12.163 ]
00:44:12.163 09:13:14 -- common/autotest_common.sh@893 -- # return 0
00:44:12.163 09:13:14 -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:44:12.163 09:13:14 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:44:12.422 09:13:14 -- ftl/trim.sh@56 -- # echo ']}'
00:44:12.422 09:13:14 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:44:12.681 09:13:14 -- ftl/trim.sh@59 -- # bdev_info='[
00:44:12.681 {
00:44:12.681 "name": "ftl0",
00:44:12.681 "aliases": [
00:44:12.681 "4d4e12f0-6a8e-4451-b5ee-b80718c55513"
00:44:12.681 ],
00:44:12.681 "product_name": "FTL disk",
00:44:12.681 "block_size": 4096,
00:44:12.681 "num_blocks": 23592960,
00:44:12.681 "uuid": "4d4e12f0-6a8e-4451-b5ee-b80718c55513",
00:44:12.681 "assigned_rate_limits": {
00:44:12.681 "rw_ios_per_sec": 0,
00:44:12.681 "rw_mbytes_per_sec": 0,
00:44:12.681 "r_mbytes_per_sec": 0,
00:44:12.681 "w_mbytes_per_sec": 0
00:44:12.681 },
00:44:12.681 "claimed": false,
00:44:12.681 "zoned": false,
00:44:12.681 "supported_io_types": {
00:44:12.681 "read": true,
00:44:12.681 "write": true,
00:44:12.681 "unmap": true,
00:44:12.681 "write_zeroes": true,
00:44:12.681 "flush": true,
00:44:12.681 "reset": false,
00:44:12.681 "compare": false,
00:44:12.681 "compare_and_write": false,
00:44:12.681 "abort": false,
00:44:12.681 "nvme_admin": false,
00:44:12.681 "nvme_io": false
00:44:12.681 },
00:44:12.681 "driver_specific": {
00:44:12.681 "ftl": {
00:44:12.681 "base_bdev": "1e9fb707-9173-4004-9d5e-6bd4145ad629",
00:44:12.681 "cache": "nvc0n1p0"
00:44:12.681 }
00:44:12.681 }
00:44:12.681 }
00:44:12.681 ]'
00:44:12.939 09:13:14 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:44:12.939 09:13:14 -- ftl/trim.sh@60 -- # nb=23592960
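The nb=23592960 captured above is the num_blocks field from the JSON, and the arithmetic is worth checking: at the reported block_size of 4096 it comes to 23592960 * 4096 = 96,636,764,160 bytes, i.e. 92160 MiB (90 GiB) of user-visible LBAs. The same figure reappears later in this log as 'L2P entries: 23592960', and at the logged 'L2P address size: 4' the mapping table needs exactly 90 MiB, which is the 'Region l2p ... blocks: 90.00 MiB' region in the layout dump. A quick shell check of both numbers:
  echo $((23592960 * 4096 / 1024 / 1024))   # 92160 MiB of user data
  echo $((23592960 * 4 / 1024 / 1024))      # 90 MiB for the L2P table
The bdev_ftl_unload call that follows tears the device down cleanly before the write phase starts.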
00:44:12.939 09:13:14 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 [2024-04-18 09:13:15.011537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:12.939 [2024-04-18 09:13:15.011802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:44:12.939 [2024-04-18 09:13:15.011937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:44:12.939 [2024-04-18 09:13:15.011998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:12.939 [2024-04-18 09:13:15.012088] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:44:12.939 [2024-04-18 09:13:15.016430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:12.939 [2024-04-18 09:13:15.016631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:44:12.939 [2024-04-18 09:13:15.016741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms
00:44:12.939 [2024-04-18 09:13:15.016788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:12.939 [2024-04-18 09:13:15.017453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:12.939 [2024-04-18 09:13:15.017588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:44:12.939 [2024-04-18 09:13:15.017697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms
00:44:12.939 [2024-04-18 09:13:15.017742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:12.939 [2024-04-18 09:13:15.021368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:12.939 [2024-04-18 09:13:15.021510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:44:12.939 [2024-04-18 09:13:15.021621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms
00:44:12.939 [2024-04-18 09:13:15.021664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:12.939 [2024-04-18 09:13:15.028506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:12.939 [2024-04-18 09:13:15.028675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:44:12.939 [2024-04-18 09:13:15.028774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.631 ms
00:44:12.939 [2024-04-18 09:13:15.028818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:13.198 [2024-04-18 09:13:15.073559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:13.198 [2024-04-18 09:13:15.073788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:44:13.198 [2024-04-18 09:13:15.073881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.564 ms
00:44:13.198 [2024-04-18 09:13:15.073923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:13.198 [2024-04-18 09:13:15.101221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:13.198 [2024-04-18 09:13:15.101482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:44:13.198 [2024-04-18 09:13:15.101589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.129 ms
00:44:13.198 [2024-04-18 09:13:15.101633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:44:13.199 [2024-04-18 09:13:15.101963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.199 [2024-04-18 09:13:15.102034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:13.199 [2024-04-18 09:13:15.102078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:44:13.199 [2024-04-18 09:13:15.102162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.199 [2024-04-18 09:13:15.147721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.199 [2024-04-18 09:13:15.147959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:13.199 [2024-04-18 09:13:15.148055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.413 ms 00:44:13.199 [2024-04-18 09:13:15.148094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.199 [2024-04-18 09:13:15.191251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.199 [2024-04-18 09:13:15.191492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:13.199 [2024-04-18 09:13:15.191578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.977 ms 00:44:13.199 [2024-04-18 09:13:15.191615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.199 [2024-04-18 09:13:15.237929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.199 [2024-04-18 09:13:15.238187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:13.199 [2024-04-18 09:13:15.238350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.132 ms 00:44:13.199 [2024-04-18 09:13:15.238414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.199 [2024-04-18 09:13:15.284880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.199 [2024-04-18 09:13:15.285134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:13.199 [2024-04-18 09:13:15.285288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.227 ms 00:44:13.199 [2024-04-18 09:13:15.285331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.199 [2024-04-18 09:13:15.285503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:13.199 [2024-04-18 09:13:15.285659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.285729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.285790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.285858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.285932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 
09:13:15.286311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.286957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.287994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:44:13.199 [2024-04-18 09:13:15.288509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.288905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.289897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.290979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.291991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:13.199 [2024-04-18 09:13:15.292061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.292952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.293983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:13.200 [2024-04-18 09:13:15.294548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:13.200 [2024-04-18 09:13:15.294589] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:13.200 [2024-04-18 09:13:15.294808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:13.200 [2024-04-18 09:13:15.294864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:13.200 [2024-04-18 09:13:15.294899] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:13.200 [2024-04-18 09:13:15.294938] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:13.200 [2024-04-18 09:13:15.294972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:13.200 [2024-04-18 09:13:15.295079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:13.200 [2024-04-18 09:13:15.295148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:13.200 [2024-04-18 
09:13:15.295186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:13.200 [2024-04-18 09:13:15.295220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:13.200 [2024-04-18 09:13:15.295257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.200 [2024-04-18 09:13:15.295293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:13.200 [2024-04-18 09:13:15.295335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.757 ms 00:44:13.200 [2024-04-18 09:13:15.295468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.318515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.459 [2024-04-18 09:13:15.318744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:13.459 [2024-04-18 09:13:15.318906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.905 ms 00:44:13.459 [2024-04-18 09:13:15.318954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.319416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:13.459 [2024-04-18 09:13:15.319532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:13.459 [2024-04-18 09:13:15.319641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:44:13.459 [2024-04-18 09:13:15.319682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.395114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.459 [2024-04-18 09:13:15.395348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:13.459 [2024-04-18 09:13:15.395529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.459 [2024-04-18 09:13:15.395573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.395753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.459 [2024-04-18 09:13:15.395792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:13.459 [2024-04-18 09:13:15.395906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.459 [2024-04-18 09:13:15.395966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.396085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.459 [2024-04-18 09:13:15.396140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:13.459 [2024-04-18 09:13:15.396200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.459 [2024-04-18 09:13:15.396234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.396304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.459 [2024-04-18 09:13:15.396340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:13.459 [2024-04-18 09:13:15.396389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.459 [2024-04-18 09:13:15.396428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.459 [2024-04-18 09:13:15.545757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.459 [2024-04-18 09:13:15.546015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:44:13.459 [2024-04-18 09:13:15.546194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.459 [2024-04-18 09:13:15.546239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.599834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.600070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:13.718 [2024-04-18 09:13:15.600229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.600273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.600481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.600541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:13.718 [2024-04-18 09:13:15.600706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.600858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.600968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.601047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:13.718 [2024-04-18 09:13:15.601156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.601212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.601460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.601533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:13.718 [2024-04-18 09:13:15.601596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.601697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.601826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.601885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:13.718 [2024-04-18 09:13:15.601926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.602024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.602128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.602290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:13.718 [2024-04-18 09:13:15.602340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.602388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.602549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:13.718 [2024-04-18 09:13:15.602596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:13.718 [2024-04-18 09:13:15.602636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:13.718 [2024-04-18 09:13:15.602671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:13.718 [2024-04-18 09:13:15.602963] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 591.390 ms, result 0 00:44:13.718 true 00:44:13.718 09:13:15 -- ftl/trim.sh@63 -- # killprocess 78849 00:44:13.718 09:13:15 -- common/autotest_common.sh@936 -- # '[' -z 78849 ']' 00:44:13.718 09:13:15 -- common/autotest_common.sh@940 -- # kill -0 78849 00:44:13.718 09:13:15 -- common/autotest_common.sh@941 -- # uname 00:44:13.718 09:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:13.718 09:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78849 00:44:13.718 killing process with pid 78849 00:44:13.718 09:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:13.718 09:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:13.718 09:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78849' 00:44:13.718 09:13:15 -- common/autotest_common.sh@955 -- # kill 78849 00:44:13.718 09:13:15 -- common/autotest_common.sh@960 -- # wait 78849 00:44:20.278 09:13:21 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:44:20.537 65536+0 records in 00:44:20.537 65536+0 records out 00:44:20.537 268435456 bytes (268 MB, 256 MiB) copied, 1.16742 s, 230 MB/s 00:44:20.537 09:13:22 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:20.537 [2024-04-18 09:13:22.605049] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:44:20.537 [2024-04-18 09:13:22.605493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79059 ] 00:44:20.797 [2024-04-18 09:13:22.795309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:21.056 [2024-04-18 09:13:23.116074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:21.623 [2024-04-18 09:13:23.547795] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:21.623 [2024-04-18 09:13:23.548103] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:21.623 [2024-04-18 09:13:23.712283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.623 [2024-04-18 09:13:23.712565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:21.623 [2024-04-18 09:13:23.712675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:21.623 [2024-04-18 09:13:23.712728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.623 [2024-04-18 09:13:23.716504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.623 [2024-04-18 09:13:23.716680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:21.623 [2024-04-18 09:13:23.716781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:44:21.623 [2024-04-18 09:13:23.716824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.623 [2024-04-18 09:13:23.717138] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:21.623 [2024-04-18 09:13:23.718491] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:21.623 [2024-04-18 09:13:23.718657] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.623 [2024-04-18 09:13:23.718752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:21.623 [2024-04-18 09:13:23.718860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:44:21.623 [2024-04-18 09:13:23.718903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.623 [2024-04-18 09:13:23.720637] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:21.882 [2024-04-18 09:13:23.744664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.744889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:21.882 [2024-04-18 09:13:23.744990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.027 ms 00:44:21.882 [2024-04-18 09:13:23.745034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.745190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.745241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:21.882 [2024-04-18 09:13:23.745278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:44:21.882 [2024-04-18 09:13:23.745365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.752881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.753070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:21.882 [2024-04-18 09:13:23.753159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.399 ms 00:44:21.882 [2024-04-18 09:13:23.753200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.753458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.753571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:21.882 [2024-04-18 09:13:23.753667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:44:21.882 [2024-04-18 09:13:23.753707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.753810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.753905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:21.882 [2024-04-18 09:13:23.753991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:44:21.882 [2024-04-18 09:13:23.754031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.754129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:21.882 [2024-04-18 09:13:23.761076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.761225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:21.882 [2024-04-18 09:13:23.761354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms 00:44:21.882 [2024-04-18 09:13:23.761414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.882 [2024-04-18 09:13:23.761533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.882 [2024-04-18 09:13:23.761582] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:21.882 [2024-04-18 09:13:23.761617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:21.883 [2024-04-18 09:13:23.761693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.883 [2024-04-18 09:13:23.761756] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:21.883 [2024-04-18 09:13:23.761811] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:44:21.883 [2024-04-18 09:13:23.761952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:21.883 [2024-04-18 09:13:23.762016] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:44:21.883 [2024-04-18 09:13:23.762202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:21.883 [2024-04-18 09:13:23.762269] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:21.883 [2024-04-18 09:13:23.762418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:44:21.883 [2024-04-18 09:13:23.762534] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:21.883 [2024-04-18 09:13:23.762593] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:21.883 [2024-04-18 09:13:23.762693] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:21.883 [2024-04-18 09:13:23.762732] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:21.883 [2024-04-18 09:13:23.762766] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:21.883 [2024-04-18 09:13:23.762833] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:21.883 [2024-04-18 09:13:23.762873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.883 [2024-04-18 09:13:23.762965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:21.883 [2024-04-18 09:13:23.763031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:44:21.883 [2024-04-18 09:13:23.763069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.883 [2024-04-18 09:13:23.763172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.883 [2024-04-18 09:13:23.763211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:21.883 [2024-04-18 09:13:23.763289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:44:21.883 [2024-04-18 09:13:23.763350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.883 [2024-04-18 09:13:23.763476] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:21.883 [2024-04-18 09:13:23.763518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:21.883 [2024-04-18 09:13:23.763554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:21.883 [2024-04-18 09:13:23.763651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.883 [2024-04-18 09:13:23.763693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:44:21.883 [2024-04-18 09:13:23.763727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:21.883 [2024-04-18 09:13:23.763761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:21.883 [2024-04-18 09:13:23.763837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:21.883 [2024-04-18 09:13:23.763876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:21.883 [2024-04-18 09:13:23.763927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:21.883 [2024-04-18 09:13:23.763993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:21.883 [2024-04-18 09:13:23.764096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:21.883 [2024-04-18 09:13:23.764130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:21.883 [2024-04-18 09:13:23.764213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:21.883 [2024-04-18 09:13:23.764254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:21.883 [2024-04-18 09:13:23.764289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.883 [2024-04-18 09:13:23.764325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:21.883 [2024-04-18 09:13:23.764454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:21.883 [2024-04-18 09:13:23.764503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.883 [2024-04-18 09:13:23.764539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:21.883 [2024-04-18 09:13:23.764574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:21.883 [2024-04-18 09:13:23.764609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:21.883 [2024-04-18 09:13:23.764644] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:21.883 [2024-04-18 09:13:23.764678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:21.883 [2024-04-18 09:13:23.764793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:21.883 [2024-04-18 09:13:23.764860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:21.883 [2024-04-18 09:13:23.764895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:21.883 [2024-04-18 09:13:23.764930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:21.883 [2024-04-18 09:13:23.764975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:21.883 [2024-04-18 09:13:23.765008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:21.883 [2024-04-18 09:13:23.765041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:21.883 [2024-04-18 09:13:23.765146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:21.883 [2024-04-18 09:13:23.765210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:21.883 [2024-04-18 09:13:23.765243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:21.883 [2024-04-18 09:13:23.765276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:21.883 [2024-04-18 09:13:23.765310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:21.883 [2024-04-18 09:13:23.765343] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:44:21.883 [2024-04-18 09:13:23.765376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:21.883 [2024-04-18 09:13:23.765482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:21.883 [2024-04-18 09:13:23.765637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:21.883 [2024-04-18 09:13:23.765678] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:21.883 [2024-04-18 09:13:23.765712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:21.883 [2024-04-18 09:13:23.765746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:21.883 [2024-04-18 09:13:23.765781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:21.883 [2024-04-18 09:13:23.765868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:21.883 [2024-04-18 09:13:23.765908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:21.883 [2024-04-18 09:13:23.765941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:21.883 [2024-04-18 09:13:23.765975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:21.883 [2024-04-18 09:13:23.766008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:21.883 [2024-04-18 09:13:23.766089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:21.883 [2024-04-18 09:13:23.766129] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:21.883 [2024-04-18 09:13:23.766187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:21.883 [2024-04-18 09:13:23.766242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:21.883 [2024-04-18 09:13:23.766363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:21.883 [2024-04-18 09:13:23.766433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:21.883 [2024-04-18 09:13:23.766531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:21.883 [2024-04-18 09:13:23.766589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:21.883 [2024-04-18 09:13:23.766734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:21.883 [2024-04-18 09:13:23.766788] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:21.883 [2024-04-18 09:13:23.766842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:21.883 [2024-04-18 09:13:23.766895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:21.883 [2024-04-18 09:13:23.767003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 
blk_sz:0x20 00:44:21.883 [2024-04-18 09:13:23.767063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:44:21.883 [2024-04-18 09:13:23.767118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:21.883 [2024-04-18 09:13:23.767172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:21.883 [2024-04-18 09:13:23.767311] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:21.883 [2024-04-18 09:13:23.767366] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:21.883 [2024-04-18 09:13:23.767459] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:21.883 [2024-04-18 09:13:23.767566] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:21.883 [2024-04-18 09:13:23.767623] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:21.883 [2024-04-18 09:13:23.767678] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:21.883 [2024-04-18 09:13:23.767735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.883 [2024-04-18 09:13:23.767851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:21.883 [2024-04-18 09:13:23.767907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:44:21.883 [2024-04-18 09:13:23.767942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.795751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.795997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:21.884 [2024-04-18 09:13:23.796118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.702 ms 00:44:21.884 [2024-04-18 09:13:23.796162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.796446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.796552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:21.884 [2024-04-18 09:13:23.796630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:44:21.884 [2024-04-18 09:13:23.796701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.865622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.865889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:21.884 [2024-04-18 09:13:23.866045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.852 ms 00:44:21.884 [2024-04-18 09:13:23.866089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.866229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.866287] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:21.884 [2024-04-18 09:13:23.866324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:21.884 [2024-04-18 09:13:23.866429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.866946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.867066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:21.884 [2024-04-18 09:13:23.867153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:44:21.884 [2024-04-18 09:13:23.867235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.867429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.867485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:21.884 [2024-04-18 09:13:23.867570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:44:21.884 [2024-04-18 09:13:23.867612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.895380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.895608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:21.884 [2024-04-18 09:13:23.895739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.700 ms 00:44:21.884 [2024-04-18 09:13:23.895781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.919299] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:44:21.884 [2024-04-18 09:13:23.919556] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:21.884 [2024-04-18 09:13:23.919738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.919779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:21.884 [2024-04-18 09:13:23.919873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.702 ms 00:44:21.884 [2024-04-18 09:13:23.919929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.957688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.957965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:21.884 [2024-04-18 09:13:23.958088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.608 ms 00:44:21.884 [2024-04-18 09:13:23.958128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:21.884 [2024-04-18 09:13:23.981939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:21.884 [2024-04-18 09:13:23.982122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:21.884 [2024-04-18 09:13:23.982226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:44:21.884 [2024-04-18 09:13:23.982266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.005601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.005790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
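The FTL startup sequence in this log is reported as repeating trace_step quadruplets from mngt/ftl_mngt.c: an "Action" (or, during shutdown, "Rollback") marker, then "name:", "duration:" and "status:" records. When triaging a slow or failing run it helps to rank the steps by duration. Below is a minimal, unofficial Python sketch of such a ranking; the script name is invented, the regexes are inferred only from the trace_step lines visible in this log rather than from any stable SPDK interface, and the elapsed-time prefix (e.g. 00:44:22.143) is assumed to be the Jenkins timestamper, so treat it as a starting point rather than a supported tool.

#!/usr/bin/env python3
# summarize_ftl_steps.py -- hypothetical helper for reading logs like this
# one; patterns inferred from the mngt/ftl_mngt.c trace_step output above.
import re
import sys

# Jenkins-style elapsed-time prefix, e.g. "00:44:22.143". Several log
# records can share one physical line in the archived console output,
# so we scan the whole text instead of going line by line.
TS = r"\d{2}:\d{2}:\d{2}\.\d{3}"
NAME_RE = re.compile(rf"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)(?=\s+{TS}\s|\s*$)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def steps(text):
    # The "Action"/"Rollback" marker records carry no data, so they are
    # ignored; we interleave "name:" and "duration:" matches by position
    # and pair each name with the first duration that follows it.
    events = [(m.start(), "name", m.group(1)) for m in NAME_RE.finditer(text)]
    events += [(m.start(), "dur", m.group(1)) for m in DUR_RE.finditer(text)]
    paired, pending = [], None
    for _, kind, value in sorted(events):
        if kind == "name":
            pending = value
        elif pending is not None:
            paired.append((float(value), pending))
            pending = None
    return paired

if __name__ == "__main__":
    text = sys.stdin.read()
    for ms, name in sorted(steps(text), reverse=True)[:10]:
        print(f"{ms:10.3f} ms  {name}")

Run against the startup trace around this point it would surface "Restore P2L checkpoints" (107.716 ms) and "Initialize NV cache" (68.852 ms) as the dominant contributors to the 487.554 ms 'FTL startup' total reported further below.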
00:44:22.143 [2024-04-18 09:13:24.005884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:44:22.143 [2024-04-18 09:13:24.005924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.006594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.006739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:22.143 [2024-04-18 09:13:24.006821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:44:22.143 [2024-04-18 09:13:24.006861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.114641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.114879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:22.143 [2024-04-18 09:13:24.114968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.716 ms 00:44:22.143 [2024-04-18 09:13:24.115007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.130615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:22.143 [2024-04-18 09:13:24.149766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.150039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:22.143 [2024-04-18 09:13:24.150155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.599 ms 00:44:22.143 [2024-04-18 09:13:24.150194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.150408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.150512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:22.143 [2024-04-18 09:13:24.150590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:22.143 [2024-04-18 09:13:24.150629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.150720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.150758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:22.143 [2024-04-18 09:13:24.150832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:44:22.143 [2024-04-18 09:13:24.150875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.153337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.153480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:44:22.143 [2024-04-18 09:13:24.153561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.411 ms 00:44:22.143 [2024-04-18 09:13:24.153602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.153706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.153750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:22.143 [2024-04-18 09:13:24.153786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:22.143 [2024-04-18 09:13:24.153862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 
09:13:24.153951] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:22.143 [2024-04-18 09:13:24.153992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.154066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:22.143 [2024-04-18 09:13:24.154099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:44:22.143 [2024-04-18 09:13:24.154170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.198279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.198500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:22.143 [2024-04-18 09:13:24.198586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.047 ms 00:44:22.143 [2024-04-18 09:13:24.198635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.198815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:22.143 [2024-04-18 09:13:24.198865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:22.143 [2024-04-18 09:13:24.198948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:44:22.143 [2024-04-18 09:13:24.198988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:22.143 [2024-04-18 09:13:24.200190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:22.143 [2024-04-18 09:13:24.206980] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 487.554 ms, result 0 00:44:22.143 [2024-04-18 09:13:24.207868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:22.143 [2024-04-18 09:13:24.229387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:30.535  Copying: 32/256 [MB] (32 MBps) Copying: 66/256 [MB] (34 MBps) Copying: 97/256 [MB] (30 MBps) Copying: 129/256 [MB] (32 MBps) Copying: 162/256 [MB] (32 MBps) Copying: 193/256 [MB] (31 MBps) Copying: 224/256 [MB] (31 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-04-18 09:13:32.205996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:30.535 [2024-04-18 09:13:32.224019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.535 [2024-04-18 09:13:32.224273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:30.535 [2024-04-18 09:13:32.224399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:30.535 [2024-04-18 09:13:32.224509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.535 [2024-04-18 09:13:32.224591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:30.535 [2024-04-18 09:13:32.229086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.535 [2024-04-18 09:13:32.229291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:30.535 [2024-04-18 09:13:32.229406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.369 ms 00:44:30.535 [2024-04-18 09:13:32.229543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:44:30.535 [2024-04-18 09:13:32.231244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.535 [2024-04-18 09:13:32.231422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:30.535 [2024-04-18 09:13:32.231519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:44:30.535 [2024-04-18 09:13:32.231608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.239249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.239464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:30.536 [2024-04-18 09:13:32.239570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.578 ms 00:44:30.536 [2024-04-18 09:13:32.239612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.246442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.246646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:44:30.536 [2024-04-18 09:13:32.246761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:44:30.536 [2024-04-18 09:13:32.246803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.294796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.295297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:30.536 [2024-04-18 09:13:32.295445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.843 ms 00:44:30.536 [2024-04-18 09:13:32.295492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.322330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.322593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:30.536 [2024-04-18 09:13:32.322684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.640 ms 00:44:30.536 [2024-04-18 09:13:32.322726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.322964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.323015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:30.536 [2024-04-18 09:13:32.323077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:44:30.536 [2024-04-18 09:13:32.323179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.372466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.372749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:30.536 [2024-04-18 09:13:32.372889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.187 ms 00:44:30.536 [2024-04-18 09:13:32.372934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.421124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.421406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:30.536 [2024-04-18 09:13:32.421540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.059 ms 00:44:30.536 [2024-04-18 
09:13:32.421582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.468990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.469251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:30.536 [2024-04-18 09:13:32.469394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.213 ms 00:44:30.536 [2024-04-18 09:13:32.469542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.516749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.517005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:30.536 [2024-04-18 09:13:32.517173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.034 ms 00:44:30.536 [2024-04-18 09:13:32.517263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.517423] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:30.536 [2024-04-18 09:13:32.517583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.517650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.517762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.517843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.517898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.517953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.518850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 
09:13:32.518905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.519934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.520987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:44:30.536 [2024-04-18 09:13:32.521041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.521947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.522944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.523943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.524981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.525978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:30.536 [2024-04-18 09:13:32.526042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:30.536 [2024-04-18 09:13:32.526128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:30.536 [2024-04-18 09:13:32.526214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:30.536 [2024-04-18 09:13:32.526248] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:30.536 [2024-04-18 09:13:32.526282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:30.536 [2024-04-18 09:13:32.526317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:30.536 [2024-04-18 09:13:32.526350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:30.536 [2024-04-18 09:13:32.526448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:30.536 [2024-04-18 09:13:32.526495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:30.536 [2024-04-18 09:13:32.526531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:30.536 [2024-04-18 09:13:32.526564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:30.536 [2024-04-18 09:13:32.526600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.526636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:30.536 [2024-04-18 09:13:32.526726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.179 ms 00:44:30.536 [2024-04-18 09:13:32.526767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.550702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.550948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:30.536 [2024-04-18 09:13:32.551061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.857 ms 00:44:30.536 [2024-04-18 09:13:32.551155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.551563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:30.536 [2024-04-18 09:13:32.551674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:44:30.536 [2024-04-18 09:13:32.551765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:44:30.536 [2024-04-18 09:13:32.551848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.620557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.536 [2024-04-18 09:13:32.620817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:30.536 [2024-04-18 09:13:32.620949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.536 [2024-04-18 09:13:32.621002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.621144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.536 [2024-04-18 09:13:32.621184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:30.536 [2024-04-18 09:13:32.621262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.536 [2024-04-18 09:13:32.621303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.621466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.536 [2024-04-18 09:13:32.621564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:30.536 [2024-04-18 09:13:32.621606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.536 [2024-04-18 09:13:32.621676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.536 [2024-04-18 09:13:32.621783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.536 [2024-04-18 09:13:32.621825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:30.536 [2024-04-18 09:13:32.621959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.536 [2024-04-18 09:13:32.621999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.755736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 [2024-04-18 09:13:32.756017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:30.795 [2024-04-18 09:13:32.756939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.795 [2024-04-18 09:13:32.757155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.813253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 [2024-04-18 09:13:32.813586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:30.795 [2024-04-18 09:13:32.813745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.795 [2024-04-18 09:13:32.813872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.814038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 [2024-04-18 09:13:32.814108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:30.795 [2024-04-18 09:13:32.814241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.795 [2024-04-18 09:13:32.814367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.814490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 
[2024-04-18 09:13:32.814572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:30.795 [2024-04-18 09:13:32.814697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.795 [2024-04-18 09:13:32.814876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.815110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 [2024-04-18 09:13:32.815307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:30.795 [2024-04-18 09:13:32.815489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.795 [2024-04-18 09:13:32.815638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.795 [2024-04-18 09:13:32.815846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.795 [2024-04-18 09:13:32.816010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:30.796 [2024-04-18 09:13:32.816143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.796 [2024-04-18 09:13:32.816210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.796 [2024-04-18 09:13:32.816358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.796 [2024-04-18 09:13:32.816529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:30.796 [2024-04-18 09:13:32.816597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.796 [2024-04-18 09:13:32.816660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.796 [2024-04-18 09:13:32.816840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:30.796 [2024-04-18 09:13:32.816989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:30.796 [2024-04-18 09:13:32.817121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:30.796 [2024-04-18 09:13:32.817248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:30.796 [2024-04-18 09:13:32.817625] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.588 ms, result 0 00:44:32.697 00:44:32.697 00:44:32.697 09:13:34 -- ftl/trim.sh@72 -- # svcpid=79184 00:44:32.697 09:13:34 -- ftl/trim.sh@73 -- # waitforlisten 79184 00:44:32.697 09:13:34 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:44:32.697 09:13:34 -- common/autotest_common.sh@817 -- # '[' -z 79184 ']' 00:44:32.697 09:13:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:32.697 09:13:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:44:32.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:32.697 09:13:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:32.697 09:13:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:44:32.697 09:13:34 -- common/autotest_common.sh@10 -- # set +x 00:44:32.697 [2024-04-18 09:13:34.455163] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
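After the 'FTL shutdown' management process completes (593.588 ms above), the trim test launches a fresh spdk_tgt and the harness blocks in waitforlisten until pid 79184 is up and listening on the UNIX domain socket /var/tmp/spdk.sock, as the xtrace lines above show. The sketch below is a rough Python equivalent of that wait, assuming only what the log itself shows: the function name, polling interval and timeout are invented for illustration, and the real logic lives in autotest_common.sh, not here.

import os
import socket
import time

def wait_for_listen(pid, rpc_sock="/var/tmp/spdk.sock", timeout=30.0):
    """Block until `pid` is alive and accepting connections on rpc_sock."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # signal 0 probes existence without killing
        except ProcessLookupError:
            raise RuntimeError(f"target (pid {pid}) exited before listening")
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(rpc_sock)  # succeeds once the RPC server accepts
            return
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.1)  # socket not created yet, or not accepting
        finally:
            s.close()
    raise TimeoutError(f"{rpc_sock} not ready after {timeout}s")

For this run it would be called as wait_for_listen(79184); polling connect attempts rather than just checking that the socket file exists matters, because spdk_tgt creates the path before its reactor starts accepting RPCs.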
00:44:32.697 [2024-04-18 09:13:34.455703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79184 ] 00:44:32.697 [2024-04-18 09:13:34.634523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.955 [2024-04-18 09:13:34.902042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:33.903 09:13:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:33.903 09:13:35 -- common/autotest_common.sh@850 -- # return 0 00:44:33.903 09:13:35 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:44:34.179 [2024-04-18 09:13:36.199431] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:34.179 [2024-04-18 09:13:36.199718] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:34.438 [2024-04-18 09:13:36.353571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.353826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:34.438 [2024-04-18 09:13:36.353944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:34.438 [2024-04-18 09:13:36.353989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.438 [2024-04-18 09:13:36.357833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.358036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:34.438 [2024-04-18 09:13:36.358197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:44:34.438 [2024-04-18 09:13:36.358244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.438 [2024-04-18 09:13:36.358614] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:34.438 [2024-04-18 09:13:36.360096] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:34.438 [2024-04-18 09:13:36.360264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.360368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:34.438 [2024-04-18 09:13:36.360438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:44:34.438 [2024-04-18 09:13:36.360522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.438 [2024-04-18 09:13:36.362200] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:34.438 [2024-04-18 09:13:36.386181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.386490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:34.438 [2024-04-18 09:13:36.386608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.983 ms 00:44:34.438 [2024-04-18 09:13:36.386657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.438 [2024-04-18 09:13:36.386884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.387011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:34.438 [2024-04-18 09:13:36.387098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.032 ms 00:44:34.438 [2024-04-18 09:13:36.387195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.438 [2024-04-18 09:13:36.394952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.438 [2024-04-18 09:13:36.395187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:34.438 [2024-04-18 09:13:36.395315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.651 ms 00:44:34.438 [2024-04-18 09:13:36.395361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.395606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.395656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:34.439 [2024-04-18 09:13:36.395762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:44:34.439 [2024-04-18 09:13:36.395806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.395867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.395949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:34.439 [2024-04-18 09:13:36.395988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:34.439 [2024-04-18 09:13:36.396025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.396081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:34.439 [2024-04-18 09:13:36.402722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.402888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:34.439 [2024-04-18 09:13:36.403059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.646 ms 00:44:34.439 [2024-04-18 09:13:36.403136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.403304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.403385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:34.439 [2024-04-18 09:13:36.403430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:34.439 [2024-04-18 09:13:36.403465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.403524] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:34.439 [2024-04-18 09:13:36.403613] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:44:34.439 [2024-04-18 09:13:36.403699] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:34.439 [2024-04-18 09:13:36.403781] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:44:34.439 [2024-04-18 09:13:36.403942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:34.439 [2024-04-18 09:13:36.404116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:34.439 [2024-04-18 09:13:36.404244] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:44:34.439 [2024-04-18 09:13:36.404313] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:34.439 [2024-04-18 09:13:36.404455] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:34.439 [2024-04-18 09:13:36.404520] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:34.439 [2024-04-18 09:13:36.404600] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:34.439 [2024-04-18 09:13:36.404692] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:34.439 [2024-04-18 09:13:36.404740] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:34.439 [2024-04-18 09:13:36.404823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.404868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:34.439 [2024-04-18 09:13:36.404905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:44:34.439 [2024-04-18 09:13:36.404945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.405103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.439 [2024-04-18 09:13:36.405148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:34.439 [2024-04-18 09:13:36.405185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:44:34.439 [2024-04-18 09:13:36.405251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.439 [2024-04-18 09:13:36.405358] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:34.439 [2024-04-18 09:13:36.405427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:34.439 [2024-04-18 09:13:36.405475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:34.439 [2024-04-18 09:13:36.405517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:34.439 [2024-04-18 09:13:36.405552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:34.439 [2024-04-18 09:13:36.405588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:34.439 [2024-04-18 09:13:36.405622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:34.439 [2024-04-18 09:13:36.405693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:34.439 [2024-04-18 09:13:36.405727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:34.439 [2024-04-18 09:13:36.405764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:34.439 [2024-04-18 09:13:36.405797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:34.439 [2024-04-18 09:13:36.405836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:34.439 [2024-04-18 09:13:36.405901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:34.439 [2024-04-18 09:13:36.405938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:34.439 [2024-04-18 09:13:36.405972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:34.439 [2024-04-18 09:13:36.406008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406044] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:44:34.439 [2024-04-18 09:13:36.406117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:34.439 [2024-04-18 09:13:36.406152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:34.439 [2024-04-18 09:13:36.406235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:34.439 [2024-04-18 09:13:36.406273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:34.439 [2024-04-18 09:13:36.406331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:34.439 [2024-04-18 09:13:36.406367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:34.439 [2024-04-18 09:13:36.406452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:34.439 [2024-04-18 09:13:36.406485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:34.439 [2024-04-18 09:13:36.406601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:34.439 [2024-04-18 09:13:36.406639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:34.439 [2024-04-18 09:13:36.406709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:34.439 [2024-04-18 09:13:36.406742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:34.439 [2024-04-18 09:13:36.406852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:34.439 [2024-04-18 09:13:36.406888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:34.439 [2024-04-18 09:13:36.406922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:34.439 [2024-04-18 09:13:36.406958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:34.439 [2024-04-18 09:13:36.407035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:34.439 [2024-04-18 09:13:36.407072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:34.439 [2024-04-18 09:13:36.407105] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:34.439 [2024-04-18 09:13:36.407142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:34.439 [2024-04-18 09:13:36.407176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:34.439 [2024-04-18 09:13:36.407257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:34.439 [2024-04-18 09:13:36.407292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:34.439 [2024-04-18 09:13:36.407329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:34.439 [2024-04-18 09:13:36.407363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:34.439 [2024-04-18 09:13:36.407503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:34.439 [2024-04-18 09:13:36.407548] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:34.439 [2024-04-18 09:13:36.407587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:34.439 [2024-04-18 09:13:36.407661] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:34.439 [2024-04-18 09:13:36.407769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:34.439 [2024-04-18 09:13:36.407829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:34.439 [2024-04-18 09:13:36.407955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:34.439 [2024-04-18 09:13:36.408014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:34.439 [2024-04-18 09:13:36.408075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:34.439 [2024-04-18 09:13:36.408166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:34.439 [2024-04-18 09:13:36.408225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:34.439 [2024-04-18 09:13:36.408281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:34.439 [2024-04-18 09:13:36.408404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:34.439 [2024-04-18 09:13:36.408465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:34.439 [2024-04-18 09:13:36.408573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:44:34.439 [2024-04-18 09:13:36.408635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:44:34.440 [2024-04-18 09:13:36.408747] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:34.440 [2024-04-18 09:13:36.408811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:34.440 [2024-04-18 09:13:36.408915] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:34.440 [2024-04-18 09:13:36.408984] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:34.440 [2024-04-18 09:13:36.409119] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:34.440 [2024-04-18 09:13:36.409216] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:34.440 [2024-04-18 09:13:36.409323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:34.440 [2024-04-18 09:13:36.409397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:34.440 [2024-04-18 09:13:36.409528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.409605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:34.440 [2024-04-18 09:13:36.409651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:44:34.440 [2024-04-18 09:13:36.409729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.439969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.440220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:34.440 [2024-04-18 09:13:36.440355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.104 ms 00:44:34.440 [2024-04-18 09:13:36.440478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.440701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.440814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:34.440 [2024-04-18 09:13:36.440921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:34.440 [2024-04-18 09:13:36.440968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.505711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.505942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:34.440 [2024-04-18 09:13:36.506052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.636 ms 00:44:34.440 [2024-04-18 09:13:36.506096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.506290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.506397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:34.440 [2024-04-18 09:13:36.506487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:34.440 [2024-04-18 09:13:36.506533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.507096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.507220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:34.440 [2024-04-18 09:13:36.507308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:44:34.440 [2024-04-18 09:13:36.507359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.507584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 09:13:36.507693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:34.440 [2024-04-18 09:13:36.507783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:44:34.440 [2024-04-18 09:13:36.507867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.440 [2024-04-18 09:13:36.536677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.440 [2024-04-18 
09:13:36.536940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:34.440 [2024-04-18 09:13:36.537058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.727 ms 00:44:34.440 [2024-04-18 09:13:36.537100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.561964] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:34.698 [2024-04-18 09:13:36.562253] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:34.698 [2024-04-18 09:13:36.562385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.562476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:34.698 [2024-04-18 09:13:36.562530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.020 ms 00:44:34.698 [2024-04-18 09:13:36.562600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.600733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.600972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:34.698 [2024-04-18 09:13:36.601078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.897 ms 00:44:34.698 [2024-04-18 09:13:36.601167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.626147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.626385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:34.698 [2024-04-18 09:13:36.626510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms 00:44:34.698 [2024-04-18 09:13:36.626553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.650874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.651138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:34.698 [2024-04-18 09:13:36.651254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.120 ms 00:44:34.698 [2024-04-18 09:13:36.651297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.652034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.652125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:34.698 [2024-04-18 09:13:36.652179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:44:34.698 [2024-04-18 09:13:36.652217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.764186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.698 [2024-04-18 09:13:36.764500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:34.698 [2024-04-18 09:13:36.764610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.898 ms 00:44:34.698 [2024-04-18 09:13:36.764655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.698 [2024-04-18 09:13:36.782530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:34.956 [2024-04-18 09:13:36.801510] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.801771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:34.956 [2024-04-18 09:13:36.801869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.679 ms 00:44:34.956 [2024-04-18 09:13:36.801920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.802070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.802120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:34.956 [2024-04-18 09:13:36.802157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:34.956 [2024-04-18 09:13:36.802198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.802369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.802491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:34.956 [2024-04-18 09:13:36.802585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:44:34.956 [2024-04-18 09:13:36.802631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.805183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.805319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:44:34.956 [2024-04-18 09:13:36.805426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:44:34.956 [2024-04-18 09:13:36.805531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.805599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.805644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:34.956 [2024-04-18 09:13:36.805720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:34.956 [2024-04-18 09:13:36.805763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.805891] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:34.956 [2024-04-18 09:13:36.805942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.805988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:34.956 [2024-04-18 09:13:36.806066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:44:34.956 [2024-04-18 09:13:36.806106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.853609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.853893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:34.956 [2024-04-18 09:13:36.854050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.321 ms 00:44:34.956 [2024-04-18 09:13:36.854095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.854296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.956 [2024-04-18 09:13:36.854482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:34.956 [2024-04-18 09:13:36.854545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:44:34.956 [2024-04-18 09:13:36.854582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.956 [2024-04-18 09:13:36.855694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:34.956 [2024-04-18 09:13:36.862811] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 501.771 ms, result 0 00:44:34.956 [2024-04-18 09:13:36.864298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:34.956 Some configs were skipped because the RPC state that can call them passed over. 00:44:34.956 09:13:36 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:44:35.214 [2024-04-18 09:13:37.141747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:35.214 [2024-04-18 09:13:37.142037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:44:35.214 [2024-04-18 09:13:37.142142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.833 ms 00:44:35.214 [2024-04-18 09:13:37.142190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:35.214 [2024-04-18 09:13:37.142342] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 47.427 ms, result 0 00:44:35.214 true 00:44:35.214 09:13:37 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:44:35.472 [2024-04-18 09:13:37.406535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:35.472 [2024-04-18 09:13:37.406825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:44:35.472 [2024-04-18 09:13:37.406937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.374 ms 00:44:35.472 [2024-04-18 09:13:37.406981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:35.472 [2024-04-18 09:13:37.407072] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 48.922 ms, result 0 00:44:35.472 true 00:44:35.472 09:13:37 -- ftl/trim.sh@81 -- # killprocess 79184 00:44:35.472 09:13:37 -- common/autotest_common.sh@936 -- # '[' -z 79184 ']' 00:44:35.472 09:13:37 -- common/autotest_common.sh@940 -- # kill -0 79184 00:44:35.472 09:13:37 -- common/autotest_common.sh@941 -- # uname 00:44:35.472 09:13:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:35.472 09:13:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79184 00:44:35.472 09:13:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:35.472 09:13:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:35.472 09:13:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79184' 00:44:35.472 killing process with pid 79184 00:44:35.472 09:13:37 -- common/autotest_common.sh@955 -- # kill 79184 00:44:35.472 09:13:37 -- common/autotest_common.sh@960 -- # wait 79184 00:44:36.845 [2024-04-18 09:13:38.760946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.761230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:36.845 [2024-04-18 09:13:38.761335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:36.845 [2024-04-18 
09:13:38.761397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.761534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:36.845 [2024-04-18 09:13:38.765968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.766165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:36.845 [2024-04-18 09:13:38.766269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:44:36.845 [2024-04-18 09:13:38.766363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.766732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.766866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:36.845 [2024-04-18 09:13:38.766966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:44:36.845 [2024-04-18 09:13:38.767125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.771180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.771357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:36.845 [2024-04-18 09:13:38.771478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:44:36.845 [2024-04-18 09:13:38.771563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.778640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.778870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:44:36.845 [2024-04-18 09:13:38.778975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:44:36.845 [2024-04-18 09:13:38.779017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.798125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.798388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:36.845 [2024-04-18 09:13:38.798526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.909 ms 00:44:36.845 [2024-04-18 09:13:38.798616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.810493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.810755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:36.845 [2024-04-18 09:13:38.810879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.749 ms 00:44:36.845 [2024-04-18 09:13:38.810923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.811213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.811339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:36.845 [2024-04-18 09:13:38.811482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:44:36.845 [2024-04-18 09:13:38.811525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.831822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.832104] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:36.845 [2024-04-18 09:13:38.832207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.190 ms 00:44:36.845 [2024-04-18 09:13:38.832250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.852016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.852283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:36.845 [2024-04-18 09:13:38.852411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.576 ms 00:44:36.845 [2024-04-18 09:13:38.852459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.871673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.871928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:36.845 [2024-04-18 09:13:38.872087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.969 ms 00:44:36.845 [2024-04-18 09:13:38.872141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.891779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.845 [2024-04-18 09:13:38.892076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:36.845 [2024-04-18 09:13:38.892182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.458 ms 00:44:36.845 [2024-04-18 09:13:38.892274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.845 [2024-04-18 09:13:38.892408] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:36.845 [2024-04-18 09:13:38.892511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.892583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.892685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.892790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.892853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.892948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.893051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:36.845 [2024-04-18 09:13:38.893147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893657] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.893966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.894904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.895976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.896824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.897912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 
09:13:38.898553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.898974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.899965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.900928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:44:36.846 [2024-04-18 09:13:38.901025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.901982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.902091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:36.846 [2024-04-18 09:13:38.902215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:36.847 [2024-04-18 09:13:38.902332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:36.847 [2024-04-18 09:13:38.902455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:36.847 [2024-04-18 09:13:38.902575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:36.847 [2024-04-18 09:13:38.902730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:36.847 [2024-04-18 09:13:38.902797] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:36.847 [2024-04-18 09:13:38.902881] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:36.847 [2024-04-18 09:13:38.902921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:36.847 [2024-04-18 09:13:38.902965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:36.847 [2024-04-18 09:13:38.903041] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:36.847 [2024-04-18 09:13:38.903091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:36.847 [2024-04-18 09:13:38.903127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:36.847 [2024-04-18 09:13:38.903202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:36.847 [2024-04-18 09:13:38.903241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:36.847 [2024-04-18 09:13:38.903282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.847 [2024-04-18 09:13:38.903359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:36.847 [2024-04-18 09:13:38.903426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.877 ms 00:44:36.847 [2024-04-18 09:13:38.903464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:44:36.847 [2024-04-18 09:13:38.927552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.847 [2024-04-18 09:13:38.927803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:36.847 [2024-04-18 09:13:38.927941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.897 ms 00:44:36.847 [2024-04-18 09:13:38.927990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.847 [2024-04-18 09:13:38.928393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.847 [2024-04-18 09:13:38.928508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:36.847 [2024-04-18 09:13:38.928602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:44:36.847 [2024-04-18 09:13:38.928705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.104 [2024-04-18 09:13:39.011888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.104 [2024-04-18 09:13:39.012208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:37.104 [2024-04-18 09:13:39.012322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.104 [2024-04-18 09:13:39.012376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.104 [2024-04-18 09:13:39.012551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.104 [2024-04-18 09:13:39.012677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:37.104 [2024-04-18 09:13:39.012740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.104 [2024-04-18 09:13:39.012775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.104 [2024-04-18 09:13:39.012868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.104 [2024-04-18 09:13:39.012909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:37.104 [2024-04-18 09:13:39.012947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.104 [2024-04-18 09:13:39.013064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.104 [2024-04-18 09:13:39.013154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.104 [2024-04-18 09:13:39.013191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:37.104 [2024-04-18 09:13:39.013231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.104 [2024-04-18 09:13:39.013268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.104 [2024-04-18 09:13:39.157421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.104 [2024-04-18 09:13:39.157662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:37.104 [2024-04-18 09:13:39.157765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.104 [2024-04-18 09:13:39.157811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.362 [2024-04-18 09:13:39.213746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.362 [2024-04-18 09:13:39.214005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:37.362 [2024-04-18 09:13:39.214169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.362 
[2024-04-18 09:13:39.214212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.362 [2024-04-18 09:13:39.214342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.362 [2024-04-18 09:13:39.214573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:37.362 [2024-04-18 09:13:39.214626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.362 [2024-04-18 09:13:39.214662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.362 [2024-04-18 09:13:39.214735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.362 [2024-04-18 09:13:39.214816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:37.362 [2024-04-18 09:13:39.214901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.362 [2024-04-18 09:13:39.214938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.362 [2024-04-18 09:13:39.215110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.362 [2024-04-18 09:13:39.215196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:37.362 [2024-04-18 09:13:39.215243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.362 [2024-04-18 09:13:39.215277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.362 [2024-04-18 09:13:39.215351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.363 [2024-04-18 09:13:39.215416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:37.363 [2024-04-18 09:13:39.215464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.363 [2024-04-18 09:13:39.215584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.363 [2024-04-18 09:13:39.215689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.363 [2024-04-18 09:13:39.215730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:37.363 [2024-04-18 09:13:39.215770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.363 [2024-04-18 09:13:39.215807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.363 [2024-04-18 09:13:39.215887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.363 [2024-04-18 09:13:39.215991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:37.363 [2024-04-18 09:13:39.216069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.363 [2024-04-18 09:13:39.216105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.363 [2024-04-18 09:13:39.216295] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.309 ms, result 0 00:44:38.737 09:13:40 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:44:38.737 09:13:40 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:38.994 [2024-04-18 09:13:40.849030] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
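For anyone replaying this part of the run by hand: the unmap-and-readback sequence traced above reduces to three commands, collected below as a minimal sketch. The `rpc` variable is an illustrative shorthand, not something from the actual test/ftl/trim.sh (which wraps these calls in additional setup and checks); everything else is copied verbatim from the command lines in this log. Note the second unmap targets the tail of the device: 23591936 = 23592960 (the L2P entry count reported in the layout dump) - 1024.

  # Shorthand for readability; an assumption, not part of the test script.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Trim 1024 blocks at the head and at the tail of ftl0 via the running app:
  $rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  $rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # After the app is killed ('FTL shutdown' above), spdk_dd appears to bring the
  # FTL bdev back up from the saved JSON config and reads 65536 blocks to a file:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The fresh "Starting SPDK ... initialization..." banner and the second full FTL startup trace that follow are spdk_dd's own process coming up with that JSON config, not a restart of the original app.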
00:44:38.994 [2024-04-18 09:13:40.849439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79259 ] 00:44:38.994 [2024-04-18 09:13:41.032066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.251 [2024-04-18 09:13:41.313643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.815 [2024-04-18 09:13:41.795768] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:39.815 [2024-04-18 09:13:41.796061] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:40.075 [2024-04-18 09:13:41.960184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:41.960464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:40.075 [2024-04-18 09:13:41.960574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:40.075 [2024-04-18 09:13:41.960626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:41.964683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:41.964879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:40.075 [2024-04-18 09:13:41.964978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.978 ms 00:44:40.075 [2024-04-18 09:13:41.965023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:41.965270] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:40.075 [2024-04-18 09:13:41.966719] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:40.075 [2024-04-18 09:13:41.966892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:41.966980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:40.075 [2024-04-18 09:13:41.967029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:44:40.075 [2024-04-18 09:13:41.967147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:41.968885] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:40.075 [2024-04-18 09:13:41.993634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:41.993906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:40.075 [2024-04-18 09:13:41.994029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.746 ms 00:44:40.075 [2024-04-18 09:13:41.994073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:41.994364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:41.994507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:40.075 [2024-04-18 09:13:41.994593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:44:40.075 [2024-04-18 09:13:41.994642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.002498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 
09:13:42.002729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:40.075 [2024-04-18 09:13:42.002880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.737 ms 00:44:40.075 [2024-04-18 09:13:42.002923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.003113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.003160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:40.075 [2024-04-18 09:13:42.003201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:44:40.075 [2024-04-18 09:13:42.003286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.003425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.003537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:40.075 [2024-04-18 09:13:42.003617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:40.075 [2024-04-18 09:13:42.003657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.003763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:40.075 [2024-04-18 09:13:42.010675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.010849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:40.075 [2024-04-18 09:13:42.010979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:44:40.075 [2024-04-18 09:13:42.011023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.011148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.011198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:40.075 [2024-04-18 09:13:42.011235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:40.075 [2024-04-18 09:13:42.011318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.011404] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:40.075 [2024-04-18 09:13:42.011536] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:44:40.075 [2024-04-18 09:13:42.011627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:40.075 [2024-04-18 09:13:42.011749] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:44:40.075 [2024-04-18 09:13:42.011877] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:40.075 [2024-04-18 09:13:42.012018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:40.075 [2024-04-18 09:13:42.012081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:44:40.075 [2024-04-18 09:13:42.012140] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:40.075 [2024-04-18 09:13:42.012261] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:40.075 [2024-04-18 09:13:42.012317] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:40.075 [2024-04-18 09:13:42.012351] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:40.075 [2024-04-18 09:13:42.012406] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:40.075 [2024-04-18 09:13:42.012445] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:40.075 [2024-04-18 09:13:42.012541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.012586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:40.075 [2024-04-18 09:13:42.012627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:44:40.075 [2024-04-18 09:13:42.012666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.012770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.075 [2024-04-18 09:13:42.012833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:40.075 [2024-04-18 09:13:42.012877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:44:40.075 [2024-04-18 09:13:42.012911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.075 [2024-04-18 09:13:42.013017] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:40.075 [2024-04-18 09:13:42.013121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:40.075 [2024-04-18 09:13:42.013164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:40.075 [2024-04-18 09:13:42.013206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:40.075 [2024-04-18 09:13:42.013241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:40.075 [2024-04-18 09:13:42.013330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:40.075 [2024-04-18 09:13:42.013365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:40.075 [2024-04-18 09:13:42.013415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:40.075 [2024-04-18 09:13:42.013494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:40.075 [2024-04-18 09:13:42.013534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:40.075 [2024-04-18 09:13:42.013627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:40.075 [2024-04-18 09:13:42.013671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:40.075 [2024-04-18 09:13:42.013705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:40.075 [2024-04-18 09:13:42.013739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:40.075 [2024-04-18 09:13:42.013774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:40.075 [2024-04-18 09:13:42.013808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:40.075 [2024-04-18 09:13:42.013924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:40.075 [2024-04-18 09:13:42.013966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:40.075 [2024-04-18 09:13:42.014001] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:44:40.075 [2024-04-18 09:13:42.014034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:40.075 [2024-04-18 09:13:42.014069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:40.075 [2024-04-18 09:13:42.014152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:40.075 [2024-04-18 09:13:42.014187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:40.075 [2024-04-18 09:13:42.014221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:40.075 [2024-04-18 09:13:42.014296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:40.075 [2024-04-18 09:13:42.014335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:40.075 [2024-04-18 09:13:42.014423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:40.075 [2024-04-18 09:13:42.014464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:40.075 [2024-04-18 09:13:42.014534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:40.076 [2024-04-18 09:13:42.014574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:40.076 [2024-04-18 09:13:42.014608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:40.076 [2024-04-18 09:13:42.014660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:40.076 [2024-04-18 09:13:42.014741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:40.076 [2024-04-18 09:13:42.014786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:40.076 [2024-04-18 09:13:42.014820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:40.076 [2024-04-18 09:13:42.014855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:40.076 [2024-04-18 09:13:42.014889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:40.076 [2024-04-18 09:13:42.014923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:40.076 [2024-04-18 09:13:42.015010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:40.076 [2024-04-18 09:13:42.015050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:40.076 [2024-04-18 09:13:42.015084] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:40.076 [2024-04-18 09:13:42.015119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:40.076 [2024-04-18 09:13:42.015155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:40.076 [2024-04-18 09:13:42.015189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:40.076 [2024-04-18 09:13:42.015225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:40.076 [2024-04-18 09:13:42.015350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:40.076 [2024-04-18 09:13:42.015411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:40.076 [2024-04-18 09:13:42.015448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:40.076 [2024-04-18 09:13:42.015482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:40.076 [2024-04-18 09:13:42.015517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:40.076 [2024-04-18 09:13:42.015553] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:40.076 [2024-04-18 09:13:42.015665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:40.076 [2024-04-18 09:13:42.015773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:40.076 [2024-04-18 09:13:42.015835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:40.076 [2024-04-18 09:13:42.015945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:40.076 [2024-04-18 09:13:42.016003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:40.076 [2024-04-18 09:13:42.016059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:40.076 [2024-04-18 09:13:42.016174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:40.076 [2024-04-18 09:13:42.016233] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:40.076 [2024-04-18 09:13:42.016289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:40.076 [2024-04-18 09:13:42.016393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:40.076 [2024-04-18 09:13:42.016510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:44:40.076 [2024-04-18 09:13:42.016620] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:44:40.076 [2024-04-18 09:13:42.016679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:40.076 [2024-04-18 09:13:42.016736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:40.076 [2024-04-18 09:13:42.016945] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:40.076 [2024-04-18 09:13:42.017002] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:40.076 [2024-04-18 09:13:42.017059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:40.076 [2024-04-18 09:13:42.017116] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:40.076 [2024-04-18 09:13:42.017171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:40.076 [2024-04-18 09:13:42.017281] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:40.076 [2024-04-18 09:13:42.017347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.017394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:40.076 [2024-04-18 09:13:42.017440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.372 ms 00:44:40.076 [2024-04-18 09:13:42.017476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.047023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.047273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:40.076 [2024-04-18 09:13:42.047383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.375 ms 00:44:40.076 [2024-04-18 09:13:42.047428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.047681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.047799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:40.076 [2024-04-18 09:13:42.047888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:44:40.076 [2024-04-18 09:13:42.047991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.120126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.120385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:40.076 [2024-04-18 09:13:42.120483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.062 ms 00:44:40.076 [2024-04-18 09:13:42.120526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.120671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.120766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:40.076 [2024-04-18 09:13:42.120822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:40.076 [2024-04-18 09:13:42.120857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.121369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.121513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:40.076 [2024-04-18 09:13:42.121593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:44:40.076 [2024-04-18 09:13:42.121632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.121794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.121836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:40.076 [2024-04-18 09:13:42.121914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:44:40.076 [2024-04-18 09:13:42.121954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.149749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.149985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:40.076 [2024-04-18 09:13:42.150094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.739 ms 00:44:40.076 
[2024-04-18 09:13:42.150137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.076 [2024-04-18 09:13:42.175138] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:40.076 [2024-04-18 09:13:42.175454] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:40.076 [2024-04-18 09:13:42.175642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.076 [2024-04-18 09:13:42.175682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:40.076 [2024-04-18 09:13:42.175722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.298 ms 00:44:40.076 [2024-04-18 09:13:42.175756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.214956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.215236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:40.334 [2024-04-18 09:13:42.215335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.978 ms 00:44:40.334 [2024-04-18 09:13:42.215449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.239570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.239841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:40.334 [2024-04-18 09:13:42.239991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.892 ms 00:44:40.334 [2024-04-18 09:13:42.240066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.261581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.261897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:40.334 [2024-04-18 09:13:42.262022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.230 ms 00:44:40.334 [2024-04-18 09:13:42.262095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.262909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.263073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:40.334 [2024-04-18 09:13:42.263246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:44:40.334 [2024-04-18 09:13:42.263322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.376442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.376648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:40.334 [2024-04-18 09:13:42.376807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.982 ms 00:44:40.334 [2024-04-18 09:13:42.376938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.394097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:40.334 [2024-04-18 09:13:42.417941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.418211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:40.334 [2024-04-18 09:13:42.418314] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.704 ms 00:44:40.334 [2024-04-18 09:13:42.418358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.418584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.418732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:40.334 [2024-04-18 09:13:42.418824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:40.334 [2024-04-18 09:13:42.418868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.334 [2024-04-18 09:13:42.418995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.334 [2024-04-18 09:13:42.419140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:40.334 [2024-04-18 09:13:42.419257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:44:40.335 [2024-04-18 09:13:42.419303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.335 [2024-04-18 09:13:42.421813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.335 [2024-04-18 09:13:42.421944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:44:40.335 [2024-04-18 09:13:42.422035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:44:40.335 [2024-04-18 09:13:42.422077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.335 [2024-04-18 09:13:42.422195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.335 [2024-04-18 09:13:42.422287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:40.335 [2024-04-18 09:13:42.422384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:40.335 [2024-04-18 09:13:42.422498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.335 [2024-04-18 09:13:42.422648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:40.335 [2024-04-18 09:13:42.422746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.335 [2024-04-18 09:13:42.422790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:40.335 [2024-04-18 09:13:42.422889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:44:40.335 [2024-04-18 09:13:42.422932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.592 [2024-04-18 09:13:42.471322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.592 [2024-04-18 09:13:42.471540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:40.592 [2024-04-18 09:13:42.471642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.252 ms 00:44:40.592 [2024-04-18 09:13:42.471749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.592 [2024-04-18 09:13:42.471976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:40.592 [2024-04-18 09:13:42.472030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:40.592 [2024-04-18 09:13:42.472115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:44:40.592 [2024-04-18 09:13:42.472157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:40.592 [2024-04-18 09:13:42.473299] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:40.592 [2024-04-18 09:13:42.480531] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 512.779 ms, result 0 00:44:40.592 [2024-04-18 09:13:42.481465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:40.592 [2024-04-18 09:13:42.503841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:48.203  Copying: 35/256 [MB] (35 MBps) Copying: 69/256 [MB] (33 MBps) Copying: 101/256 [MB] (32 MBps) Copying: 134/256 [MB] (32 MBps) Copying: 169/256 [MB] (34 MBps) Copying: 203/256 [MB] (34 MBps) Copying: 236/256 [MB] (33 MBps) Copying: 256/256 [MB] (average 33 MBps)[2024-04-18 09:13:50.068331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:48.203 [2024-04-18 09:13:50.086230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.086489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:48.203 [2024-04-18 09:13:50.086627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:48.203 [2024-04-18 09:13:50.086670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.086788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:48.203 [2024-04-18 09:13:50.091155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.091326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:48.203 [2024-04-18 09:13:50.091442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.295 ms 00:44:48.203 [2024-04-18 09:13:50.091485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.091789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.091837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:48.203 [2024-04-18 09:13:50.091877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:44:48.203 [2024-04-18 09:13:50.091995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.095586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.095723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:48.203 [2024-04-18 09:13:50.095812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:44:48.203 [2024-04-18 09:13:50.095851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.102699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.102853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:44:48.203 [2024-04-18 09:13:50.102945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.781 ms 00:44:48.203 [2024-04-18 09:13:50.102989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.150767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.151029] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:48.203 [2024-04-18 09:13:50.151144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.674 ms 00:44:48.203 [2024-04-18 09:13:50.151185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.178035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.178302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:48.203 [2024-04-18 09:13:50.178426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.718 ms 00:44:48.203 [2024-04-18 09:13:50.178469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.178683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.178763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:48.203 [2024-04-18 09:13:50.178802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:44:48.203 [2024-04-18 09:13:50.178882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.225117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.225368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:48.203 [2024-04-18 09:13:50.225464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.174 ms 00:44:48.203 [2024-04-18 09:13:50.225502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.203 [2024-04-18 09:13:50.272859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.203 [2024-04-18 09:13:50.273139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:48.203 [2024-04-18 09:13:50.273232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.235 ms 00:44:48.203 [2024-04-18 09:13:50.273272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.461 [2024-04-18 09:13:50.319878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.461 [2024-04-18 09:13:50.320153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:48.461 [2024-04-18 09:13:50.320311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.472 ms 00:44:48.461 [2024-04-18 09:13:50.320353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.461 [2024-04-18 09:13:50.365612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.461 [2024-04-18 09:13:50.365858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:48.461 [2024-04-18 09:13:50.366035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.992 ms 00:44:48.461 [2024-04-18 09:13:50.366073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.461 [2024-04-18 09:13:50.366207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:48.461 [2024-04-18 09:13:50.366263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:48.461 [2024-04-18 09:13:50.366405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:48.461 [2024-04-18 09:13:50.366520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:48.461 
[2024-04-18 09:13:50.366588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.366639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.366690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.366863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.366920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.367984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:44:48.462 [2024-04-18 09:13:50.368674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.368958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.369960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.370962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.371964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.372985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:48.462 [2024-04-18 09:13:50.373685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.373781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.373899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.374903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:48.463 [2024-04-18 09:13:50.375006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:48.463 [2024-04-18 09:13:50.375049] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:48.463 [2024-04-18 09:13:50.375179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:44:48.463 [2024-04-18 09:13:50.375216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:48.463 [2024-04-18 09:13:50.375248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:48.463 [2024-04-18 09:13:50.375282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:48.463 [2024-04-18 09:13:50.375360] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:48.463 [2024-04-18 09:13:50.375413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:48.463 [2024-04-18 09:13:50.375449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:48.463 [2024-04-18 09:13:50.375482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:48.463 [2024-04-18 09:13:50.375560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:48.463 [2024-04-18 09:13:50.375596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.463 [2024-04-18 09:13:50.375631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:48.463 [2024-04-18 09:13:50.375702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.390 ms 00:44:48.463 [2024-04-18 09:13:50.375814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.398869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.463 [2024-04-18 09:13:50.399117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:48.463 [2024-04-18 09:13:50.399211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.979 ms 00:44:48.463 [2024-04-18 09:13:50.399251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.399661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.463 [2024-04-18 09:13:50.399774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:48.463 [2024-04-18 09:13:50.399860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:44:48.463 [2024-04-18 09:13:50.399958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.466330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.463 [2024-04-18 09:13:50.466617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:48.463 [2024-04-18 09:13:50.466719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.463 [2024-04-18 09:13:50.466758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.466891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.463 [2024-04-18 09:13:50.467002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:48.463 [2024-04-18 09:13:50.467039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.463 [2024-04-18 09:13:50.467077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.467167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.463 [2024-04-18 09:13:50.467212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:48.463 [2024-04-18 09:13:50.467288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.463 [2024-04-18 
09:13:50.467322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.463 [2024-04-18 09:13:50.467365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.463 [2024-04-18 09:13:50.467411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:48.463 [2024-04-18 09:13:50.467444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.463 [2024-04-18 09:13:50.467477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.602646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.602903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:48.721 [2024-04-18 09:13:50.603054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.603098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.657084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.657338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:48.721 [2024-04-18 09:13:50.657551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.657607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.657707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.657762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:48.721 [2024-04-18 09:13:50.657798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.657832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.657886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.657922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:48.721 [2024-04-18 09:13:50.657956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.657989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.658224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.658321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:48.721 [2024-04-18 09:13:50.658410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.658488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.658575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.658710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:48.721 [2024-04-18 09:13:50.658754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.658795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.658921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.658971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:48.721 [2024-04-18 09:13:50.659096] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.659139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.659218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:48.721 [2024-04-18 09:13:50.659261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:48.721 [2024-04-18 09:13:50.659299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:48.721 [2024-04-18 09:13:50.659385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.721 [2024-04-18 09:13:50.659579] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 573.346 ms, result 0 00:44:50.101 00:44:50.101 00:44:50.101 09:13:52 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:44:50.101 09:13:52 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:44:50.687 09:13:52 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:50.687 [2024-04-18 09:13:52.775771] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:44:50.687 [2024-04-18 09:13:52.776218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79386 ] 00:44:50.946 [2024-04-18 09:13:52.948336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.231 [2024-04-18 09:13:53.219049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:51.795 [2024-04-18 09:13:53.681857] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.795 [2024-04-18 09:13:53.682164] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.795 [2024-04-18 09:13:53.846238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.846568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:51.795 [2024-04-18 09:13:53.846695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:51.795 [2024-04-18 09:13:53.846755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.851436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.851650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:51.795 [2024-04-18 09:13:53.851763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:44:51.795 [2024-04-18 09:13:53.851872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.852201] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:51.795 [2024-04-18 09:13:53.854027] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:51.795 [2024-04-18 09:13:53.854222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.854445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
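The ftl/trim.sh@86, @87 and @90 traces above show the test's verify-then-rewrite pattern: the previously trimmed 4 MiB range is compared against /dev/zero (it should read back as zeroes), the data file is checksummed, and a fresh random pattern is written back through the ftl0 bdev with spdk_dd. A minimal standalone sketch of that sequence, using only the commands visible in the log (SPDK_DIR is a stand-in for the repository path shown above):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Trimmed blocks are expected to read back as zeroes.
    cmp --bytes=4194304 "$SPDK_DIR/test/ftl/data" /dev/zero
    # Checksum the data region before rewriting it.
    md5sum "$SPDK_DIR/test/ftl/data"
    # Write a 1024-block random pattern through the FTL bdev.
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/random_pattern" \
        --ob=ftl0 --count=1024 --json="$SPDK_DIR/test/ftl/config/ftl.json"

The spdk_dd run is what triggers the second 'FTL startup' sequence that follows here: the ftl0 bdev is brought up from the JSON config, written to, and shut down again.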
00:44:51.795 [2024-04-18 09:13:53.854509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:44:51.795 [2024-04-18 09:13:53.854576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.856542] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:51.795 [2024-04-18 09:13:53.880780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.881052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:51.795 [2024-04-18 09:13:53.881149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.235 ms 00:44:51.795 [2024-04-18 09:13:53.881265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.881467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.881532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:51.795 [2024-04-18 09:13:53.881647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:44:51.795 [2024-04-18 09:13:53.881701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.889777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.890021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:51.795 [2024-04-18 09:13:53.890120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.927 ms 00:44:51.795 [2024-04-18 09:13:53.890222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.890453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.890568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:51.795 [2024-04-18 09:13:53.890659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:44:51.795 [2024-04-18 09:13:53.890759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.890843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.795 [2024-04-18 09:13:53.890884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:51.795 [2024-04-18 09:13:53.890964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:51.795 [2024-04-18 09:13:53.891005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.795 [2024-04-18 09:13:53.891100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:52.055 [2024-04-18 09:13:53.898104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.055 [2024-04-18 09:13:53.898315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:52.055 [2024-04-18 09:13:53.898423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.012 ms 00:44:52.055 [2024-04-18 09:13:53.898539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.055 [2024-04-18 09:13:53.898688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.055 [2024-04-18 09:13:53.898740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:52.055 [2024-04-18 09:13:53.898826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 
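Every management step in these runs is bracketed by trace_step records: an Action marker followed by name, duration and status entries. When triaging a slow run, the per-step timings can be pulled straight out of the console text; a small awk sketch, assuming the original one-entry-per-line console format and a saved copy named console.log (both hypothetical here):

    # Rank FTL management steps by duration, slowest first.
    awk '/trace_step/ && /name:/     { name = substr($0, index($0, "name: ") + 6) }
         /trace_step/ && /duration:/ { printf "%10s ms  %s\n", $(NF - 1), name }' \
        console.log | sort -rn | head

Against the first startup above, this puts 'Restore P2L checkpoints' (112.982 ms) and 'Initialize NV cache' (72.062 ms) at the top, consistent with the 512.779 ms 'FTL startup' total reported by finish_msg.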
00:44:52.055 [2024-04-18 09:13:53.898902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.055 [2024-04-18 09:13:53.898968] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:52.055 [2024-04-18 09:13:53.899081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:44:52.055 [2024-04-18 09:13:53.899174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:52.055 [2024-04-18 09:13:53.899241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:44:52.055 [2024-04-18 09:13:53.899425] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:52.055 [2024-04-18 09:13:53.899542] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:52.055 [2024-04-18 09:13:53.899645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:44:52.055 [2024-04-18 09:13:53.899708] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:52.055 [2024-04-18 09:13:53.899791] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:52.055 [2024-04-18 09:13:53.899848] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:52.055 [2024-04-18 09:13:53.899883] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:52.055 [2024-04-18 09:13:53.899975] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:52.055 [2024-04-18 09:13:53.900017] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:52.055 [2024-04-18 09:13:53.900053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.055 [2024-04-18 09:13:53.900148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:52.055 [2024-04-18 09:13:53.900198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:44:52.055 [2024-04-18 09:13:53.900281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.055 [2024-04-18 09:13:53.900471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.055 [2024-04-18 09:13:53.900574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:52.055 [2024-04-18 09:13:53.900659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:44:52.055 [2024-04-18 09:13:53.900749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.055 [2024-04-18 09:13:53.900878] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:52.055 [2024-04-18 09:13:53.900999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:52.055 [2024-04-18 09:13:53.901048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:52.055 [2024-04-18 09:13:53.901145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.055 [2024-04-18 09:13:53.901187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:52.055 [2024-04-18 09:13:53.901222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:52.055 [2024-04-18 09:13:53.901281] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:52.055 [2024-04-18 09:13:53.901316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:52.055 [2024-04-18 09:13:53.901351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:52.055 [2024-04-18 09:13:53.901404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:52.055 [2024-04-18 09:13:53.901477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:52.055 [2024-04-18 09:13:53.901516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:52.055 [2024-04-18 09:13:53.901551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:52.055 [2024-04-18 09:13:53.901585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:52.055 [2024-04-18 09:13:53.901619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:52.055 [2024-04-18 09:13:53.901676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.055 [2024-04-18 09:13:53.901710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:52.055 [2024-04-18 09:13:53.901744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:52.055 [2024-04-18 09:13:53.901777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.055 [2024-04-18 09:13:53.901812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:52.055 [2024-04-18 09:13:53.901866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:52.055 [2024-04-18 09:13:53.901901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:52.055 [2024-04-18 09:13:53.901936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:52.055 [2024-04-18 09:13:53.901970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:52.055 [2024-04-18 09:13:53.902058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:52.055 [2024-04-18 09:13:53.902093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:52.055 [2024-04-18 09:13:53.902160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:52.055 [2024-04-18 09:13:53.902193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:52.055 [2024-04-18 09:13:53.902286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:52.055 [2024-04-18 09:13:53.902320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:52.055 [2024-04-18 09:13:53.902399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:52.055 [2024-04-18 09:13:53.902444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:52.055 [2024-04-18 09:13:53.902587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:52.055 
[2024-04-18 09:13:53.902622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:52.055 [2024-04-18 09:13:53.902720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:52.055 [2024-04-18 09:13:53.902759] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:52.055 [2024-04-18 09:13:53.902795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:52.055 [2024-04-18 09:13:53.902861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:52.055 [2024-04-18 09:13:53.902897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.055 [2024-04-18 09:13:53.902933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:52.055 [2024-04-18 09:13:53.902968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:52.055 [2024-04-18 09:13:53.903049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:52.055 [2024-04-18 09:13:53.903090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:52.055 [2024-04-18 09:13:53.903125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:52.055 [2024-04-18 09:13:53.903199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:52.055 [2024-04-18 09:13:53.903273] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:52.055 [2024-04-18 09:13:53.903337] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:52.055 [2024-04-18 09:13:53.903463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:52.055 [2024-04-18 09:13:53.903525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:52.055 [2024-04-18 09:13:53.903628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:52.055 [2024-04-18 09:13:53.903684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:52.056 [2024-04-18 09:13:53.903811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:52.056 [2024-04-18 09:13:53.903874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:52.056 [2024-04-18 09:13:53.903982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:52.056 [2024-04-18 09:13:53.904082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:52.056 [2024-04-18 09:13:53.904235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:52.056 [2024-04-18 09:13:53.904293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:44:52.056 [2024-04-18 09:13:53.904396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 
blk_offs:0x6bc0 blk_sz:0x20 00:44:52.056 [2024-04-18 09:13:53.904454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:52.056 [2024-04-18 09:13:53.904548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:52.056 [2024-04-18 09:13:53.904604] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:52.056 [2024-04-18 09:13:53.904662] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:52.056 [2024-04-18 09:13:53.904758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:52.056 [2024-04-18 09:13:53.904814] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:52.056 [2024-04-18 09:13:53.904908] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:52.056 [2024-04-18 09:13:53.904965] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:52.056 [2024-04-18 09:13:53.905023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:53.905093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:52.056 [2024-04-18 09:13:53.905137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:44:52.056 [2024-04-18 09:13:53.905172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:53.932510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:53.932778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:52.056 [2024-04-18 09:13:53.932884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.212 ms 00:44:52.056 [2024-04-18 09:13:53.932977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:53.933204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:53.933320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:52.056 [2024-04-18 09:13:53.933427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:52.056 [2024-04-18 09:13:53.933514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.002825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.003072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:52.056 [2024-04-18 09:13:54.003171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.232 ms 00:44:52.056 [2024-04-18 09:13:54.003288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.003471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.003526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:52.056 [2024-04-18 09:13:54.003634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:44:52.056 [2024-04-18 09:13:54.003683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.004235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.004369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:52.056 [2024-04-18 09:13:54.004503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:44:52.056 [2024-04-18 09:13:54.004554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.004789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.004844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:52.056 [2024-04-18 09:13:54.004939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:44:52.056 [2024-04-18 09:13:54.005030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.031041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.031293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:52.056 [2024-04-18 09:13:54.031414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.932 ms 00:44:52.056 [2024-04-18 09:13:54.031495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.053557] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:52.056 [2024-04-18 09:13:54.053824] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:52.056 [2024-04-18 09:13:54.053945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.053984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:52.056 [2024-04-18 09:13:54.054059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.216 ms 00:44:52.056 [2024-04-18 09:13:54.054153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.090742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.091024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:52.056 [2024-04-18 09:13:54.091128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.418 ms 00:44:52.056 [2024-04-18 09:13:54.091234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.115396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.115669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:52.056 [2024-04-18 09:13:54.115766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.882 ms 00:44:52.056 [2024-04-18 09:13:54.115869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.140367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.140653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:52.056 [2024-04-18 09:13:54.140749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.312 ms 00:44:52.056 [2024-04-18 
09:13:54.140838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.056 [2024-04-18 09:13:54.141569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.056 [2024-04-18 09:13:54.141713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:52.056 [2024-04-18 09:13:54.141802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:44:52.056 [2024-04-18 09:13:54.141843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.254647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.254904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:52.314 [2024-04-18 09:13:54.255009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.698 ms 00:44:52.314 [2024-04-18 09:13:54.255123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.273000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:52.314 [2024-04-18 09:13:54.291727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.292027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:52.314 [2024-04-18 09:13:54.292124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.409 ms 00:44:52.314 [2024-04-18 09:13:54.292212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.292388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.292437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:52.314 [2024-04-18 09:13:54.292522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:44:52.314 [2024-04-18 09:13:54.292598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.292699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.292739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:52.314 [2024-04-18 09:13:54.292825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:44:52.314 [2024-04-18 09:13:54.292924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.295412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.295538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:44:52.314 [2024-04-18 09:13:54.295622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:44:52.314 [2024-04-18 09:13:54.295662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.295761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.295804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:52.314 [2024-04-18 09:13:54.295840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:52.314 [2024-04-18 09:13:54.295899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.295984] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:52.314 [2024-04-18 
09:13:54.296074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.296115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:52.314 [2024-04-18 09:13:54.296207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:44:52.314 [2024-04-18 09:13:54.296247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.343929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.344222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:52.314 [2024-04-18 09:13:54.344332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.599 ms 00:44:52.314 [2024-04-18 09:13:54.344392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.344639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.314 [2024-04-18 09:13:54.344759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:52.314 [2024-04-18 09:13:54.344849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:52.314 [2024-04-18 09:13:54.344891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.314 [2024-04-18 09:13:54.346049] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:52.314 [2024-04-18 09:13:54.353316] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 499.474 ms, result 0 00:44:52.314 [2024-04-18 09:13:54.354228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:52.314 [2024-04-18 09:13:54.376929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:52.571  Copying: 4096/4096 [kB] (average 33 MBps)[2024-04-18 09:13:54.503531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:52.571 [2024-04-18 09:13:54.521790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.522082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:52.571 [2024-04-18 09:13:54.522201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:52.571 [2024-04-18 09:13:54.522246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.522352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:52.571 [2024-04-18 09:13:54.526801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.527020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:52.571 [2024-04-18 09:13:54.527139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:44:52.571 [2024-04-18 09:13:54.527189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.528888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.529042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:52.571 [2024-04-18 09:13:54.529129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 
00:44:52.571 [2024-04-18 09:13:54.529177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.532996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.533156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:52.571 [2024-04-18 09:13:54.533244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.721 ms 00:44:52.571 [2024-04-18 09:13:54.533293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.540552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.540703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:44:52.571 [2024-04-18 09:13:54.540785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.191 ms 00:44:52.571 [2024-04-18 09:13:54.540826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.588680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.588932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:52.571 [2024-04-18 09:13:54.589033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.769 ms 00:44:52.571 [2024-04-18 09:13:54.589149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.616676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.616909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:52.571 [2024-04-18 09:13:54.617007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.388 ms 00:44:52.571 [2024-04-18 09:13:54.617052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.617419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.617569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:52.571 [2024-04-18 09:13:54.617657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:44:52.571 [2024-04-18 09:13:54.617733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.571 [2024-04-18 09:13:54.664230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.571 [2024-04-18 09:13:54.664450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:52.571 [2024-04-18 09:13:54.664549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.434 ms 00:44:52.571 [2024-04-18 09:13:54.664630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.830 [2024-04-18 09:13:54.714012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.830 [2024-04-18 09:13:54.714240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:52.830 [2024-04-18 09:13:54.714332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.245 ms 00:44:52.830 [2024-04-18 09:13:54.714439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.830 [2024-04-18 09:13:54.763239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.830 [2024-04-18 09:13:54.763519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:52.830 [2024-04-18 09:13:54.763619] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.643 ms 00:44:52.830 [2024-04-18 09:13:54.763666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.830 [2024-04-18 09:13:54.812696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.830 [2024-04-18 09:13:54.812959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:52.830 [2024-04-18 09:13:54.813070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.800 ms 00:44:52.830 [2024-04-18 09:13:54.813119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.830 [2024-04-18 09:13:54.813310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:52.830 [2024-04-18 09:13:54.813467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.813595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.813785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.813973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.814901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.815939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.816867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.817990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.818976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.819076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.819217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.819327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:52.830 [2024-04-18 09:13:54.819434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.819535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.819659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.819767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.819871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.820891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 
09:13:54.821465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.821912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.822933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.823829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:44:52.831 [2024-04-18 09:13:54.824535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.824937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.825034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.825157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:52.831 [2024-04-18 09:13:54.825328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:52.831 [2024-04-18 09:13:54.825471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:52.831 [2024-04-18 09:13:54.825583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:52.831 [2024-04-18 09:13:54.825625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:52.831 [2024-04-18 09:13:54.825717] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:52.831 [2024-04-18 09:13:54.825761] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:52.831 [2024-04-18 09:13:54.825796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:52.831 [2024-04-18 09:13:54.825847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:52.831 [2024-04-18 09:13:54.825881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:52.831 [2024-04-18 09:13:54.825954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:52.831 [2024-04-18 09:13:54.826031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:52.831 [2024-04-18 09:13:54.826083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.831 [2024-04-18 09:13:54.826165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:52.831 [2024-04-18 09:13:54.826295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.771 ms 00:44:52.831 [2024-04-18 09:13:54.826441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.850213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.831 [2024-04-18 09:13:54.850476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:52.831 [2024-04-18 09:13:54.850576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:44:52.831 [2024-04-18 09:13:54.850628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.851075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.831 [2024-04-18 09:13:54.851192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:52.831 [2024-04-18 09:13:54.851292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:44:52.831 [2024-04-18 09:13:54.851335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.919876] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:44:52.831 [2024-04-18 09:13:54.920154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:52.831 [2024-04-18 09:13:54.920249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:52.831 [2024-04-18 09:13:54.920336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.920503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:52.831 [2024-04-18 09:13:54.920566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:52.831 [2024-04-18 09:13:54.920665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:52.831 [2024-04-18 09:13:54.920708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.920864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:52.831 [2024-04-18 09:13:54.920987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:52.831 [2024-04-18 09:13:54.921075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:52.831 [2024-04-18 09:13:54.921118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.831 [2024-04-18 09:13:54.921214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:52.831 [2024-04-18 09:13:54.921258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:52.831 [2024-04-18 09:13:54.921297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:52.831 [2024-04-18 09:13:54.921331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.089 [2024-04-18 09:13:55.062282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.089 [2024-04-18 09:13:55.062548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:53.089 [2024-04-18 09:13:55.062646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.089 [2024-04-18 09:13:55.062739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.089 [2024-04-18 09:13:55.119622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.089 [2024-04-18 09:13:55.119889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:53.089 [2024-04-18 09:13:55.120039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.089 [2024-04-18 09:13:55.120094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.089 [2024-04-18 09:13:55.120255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.089 [2024-04-18 09:13:55.120342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:53.089 [2024-04-18 09:13:55.120449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.089 [2024-04-18 09:13:55.120550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.089 [2024-04-18 09:13:55.120629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.089 [2024-04-18 09:13:55.120690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:53.089 [2024-04-18 09:13:55.120771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.089 [2024-04-18 09:13:55.120813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:44:53.089 [2024-04-18 09:13:55.121006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:53.089 [2024-04-18 09:13:55.121105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:44:53.089 [2024-04-18 09:13:55.121185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:53.089 [2024-04-18 09:13:55.121224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.089 [2024-04-18 09:13:55.121338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:53.089 [2024-04-18 09:13:55.121406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:44:53.089 [2024-04-18 09:13:55.121540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:53.089 [2024-04-18 09:13:55.121590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.089 [2024-04-18 09:13:55.121691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:53.089 [2024-04-18 09:13:55.121736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:44:53.089 [2024-04-18 09:13:55.121773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:53.089 [2024-04-18 09:13:55.121902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.089 [2024-04-18 09:13:55.121991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:53.089 [2024-04-18 09:13:55.122033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:44:53.089 [2024-04-18 09:13:55.122069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:53.089 [2024-04-18 09:13:55.122104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.089 [2024-04-18 09:13:55.122290] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.501 ms, result 0
00:44:55.049 
00:44:55.049 
00:44:55.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:44:55.049 09:13:56 -- ftl/trim.sh@93 -- # svcpid=79428
00:44:55.049 09:13:56 -- ftl/trim.sh@94 -- # waitforlisten 79428
00:44:55.049 09:13:56 -- common/autotest_common.sh@817 -- # '[' -z 79428 ']'
00:44:55.049 09:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:44:55.049 09:13:56 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:44:55.049 09:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100
00:44:55.049 09:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:44:55.049 09:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable
00:44:55.049 09:13:56 -- common/autotest_common.sh@10 -- # set +x
00:44:55.049 [2024-04-18 09:13:56.900167] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
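
The 'FTL shutdown' finish_msg above reports the total for the whole management pipeline; every step before it contributes a group of records from mngt/ftl_mngt.c:trace_step (an Action or Rollback marker, then name, duration, and status). A minimal offline helper for tallying those groups against the reported total might look like the sketch below. It is a hypothetical script, not part of the SPDK tree, and it assumes one log record per line as in the output above.

import re
import sys

# Each management step logs, via mngt/ftl_mngt.c:trace_step:
#   "Action" (or "Rollback"), then "name: <step>", "duration: <n> ms", "status: <n>"
FIELD = re.compile(r"\[FTL\]\[\w+\] (?P<key>name|duration|status): (?P<val>.+?)\s*$")

def step_records(lines):
    """Yield (step_name, duration_ms, status) triples from trace_step log lines."""
    name = duration = None
    for line in lines:
        m = FIELD.search(line)
        if not m:
            continue
        key, val = m.group("key"), m.group("val")
        if key == "name":
            name = val
        elif key == "duration":
            duration = float(val.split()[0])  # "48.643 ms" -> 48.643
        elif key == "status" and name is not None:
            yield name, duration if duration is not None else 0.0, int(val)
            name = duration = None

if __name__ == "__main__":
    total = 0.0
    for name, dur, status in step_records(sys.stdin):
        total += dur
        flag = "" if status == 0 else "   <-- nonzero status"
        print(f"{dur:10.3f} ms  {name}{flag}")
    print(f"{total:10.3f} ms  summed over steps (finish_msg reports the pipeline total)")

Fed the shutdown records above, it would list each rollback step at 0.000 ms and flag any step whose status is nonzero.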
00:44:55.049 [2024-04-18 09:13:56.900573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79428 ] 00:44:55.049 [2024-04-18 09:13:57.069609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.308 [2024-04-18 09:13:57.342135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.681 09:13:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:44:56.681 09:13:58 -- common/autotest_common.sh@850 -- # return 0 00:44:56.681 09:13:58 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:44:56.681 [2024-04-18 09:13:58.718788] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.681 [2024-04-18 09:13:58.719049] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.939 [2024-04-18 09:13:58.899821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.939 [2024-04-18 09:13:58.900093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:56.939 [2024-04-18 09:13:58.900213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:56.939 [2024-04-18 09:13:58.900332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.939 [2024-04-18 09:13:58.904359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.939 [2024-04-18 09:13:58.904555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:56.939 [2024-04-18 09:13:58.904662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.933 ms 00:44:56.939 [2024-04-18 09:13:58.904710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.939 [2024-04-18 09:13:58.904996] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:56.939 [2024-04-18 09:13:58.906545] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:56.939 [2024-04-18 09:13:58.906718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.939 [2024-04-18 09:13:58.906866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:56.939 [2024-04-18 09:13:58.906921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:44:56.939 [2024-04-18 09:13:58.906972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.939 [2024-04-18 09:13:58.908692] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:56.939 [2024-04-18 09:13:58.933789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.939 [2024-04-18 09:13:58.934046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:56.940 [2024-04-18 09:13:58.934145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.099 ms 00:44:56.940 [2024-04-18 09:13:58.934194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.934448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.934585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:56.940 [2024-04-18 09:13:58.934706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.031 ms 00:44:56.940 [2024-04-18 09:13:58.934755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.942863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.943100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:56.940 [2024-04-18 09:13:58.943205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.974 ms 00:44:56.940 [2024-04-18 09:13:58.943253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.943491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.943554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:56.940 [2024-04-18 09:13:58.943667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:44:56.940 [2024-04-18 09:13:58.943714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.943800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.943843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:56.940 [2024-04-18 09:13:58.943880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:56.940 [2024-04-18 09:13:58.944041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.944163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:56.940 [2024-04-18 09:13:58.951065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.951234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:56.940 [2024-04-18 09:13:58.951329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:44:56.940 [2024-04-18 09:13:58.951437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.951578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.951631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:56.940 [2024-04-18 09:13:58.951747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:56.940 [2024-04-18 09:13:58.951792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.951894] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:56.940 [2024-04-18 09:13:58.952014] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:44:56.940 [2024-04-18 09:13:58.952172] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:56.940 [2024-04-18 09:13:58.952393] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:44:56.940 [2024-04-18 09:13:58.952640] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:44:56.940 [2024-04-18 09:13:58.952790] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:56.940 [2024-04-18 09:13:58.952940] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:44:56.940 [2024-04-18 09:13:58.953124] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:56.940 [2024-04-18 09:13:58.953306] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:56.940 [2024-04-18 09:13:58.953476] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:56.940 [2024-04-18 09:13:58.953623] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:56.940 [2024-04-18 09:13:58.953680] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:44:56.940 [2024-04-18 09:13:58.953750] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:44:56.940 [2024-04-18 09:13:58.953808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.953866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:56.940 [2024-04-18 09:13:58.953950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:44:56.940 [2024-04-18 09:13:58.954021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.954152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.940 [2024-04-18 09:13:58.954215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:56.940 [2024-04-18 09:13:58.954363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:44:56.940 [2024-04-18 09:13:58.954433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.940 [2024-04-18 09:13:58.954595] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:56.940 [2024-04-18 09:13:58.954748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:56.940 [2024-04-18 09:13:58.954889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.940 [2024-04-18 09:13:58.955004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.940 [2024-04-18 09:13:58.955078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:56.940 [2024-04-18 09:13:58.955158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:56.940 [2024-04-18 09:13:58.955263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:56.940 [2024-04-18 09:13:58.955394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:56.940 [2024-04-18 09:13:58.955509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:56.940 [2024-04-18 09:13:58.955629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.940 [2024-04-18 09:13:58.955688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:56.940 [2024-04-18 09:13:58.955761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:56.940 [2024-04-18 09:13:58.955870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.940 [2024-04-18 09:13:58.956002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:56.940 [2024-04-18 09:13:58.956133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:44:56.940 [2024-04-18 09:13:58.956244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.940 [2024-04-18 09:13:58.956301] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:44:56.940 [2024-04-18 09:13:58.956386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:44:56.940 [2024-04-18 09:13:58.956491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.940 [2024-04-18 09:13:58.956615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:44:56.940 [2024-04-18 09:13:58.956726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:44:56.940 [2024-04-18 09:13:58.956846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:44:56.940 [2024-04-18 09:13:58.956900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:56.940 [2024-04-18 09:13:58.956975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:56.940 [2024-04-18 09:13:58.957080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:56.940 [2024-04-18 09:13:58.957197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:56.940 [2024-04-18 09:13:58.957254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:44:56.940 [2024-04-18 09:13:58.957384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:56.940 [2024-04-18 09:13:58.957437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:56.940 [2024-04-18 09:13:58.957522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:56.940 [2024-04-18 09:13:58.957583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:56.940 [2024-04-18 09:13:58.957636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:56.940 [2024-04-18 09:13:58.957724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:44:56.940 [2024-04-18 09:13:58.957790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:44:56.940 [2024-04-18 09:13:58.957844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:56.940 [2024-04-18 09:13:58.957931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:56.940 [2024-04-18 09:13:58.957992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.940 [2024-04-18 09:13:58.958049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:56.940 [2024-04-18 09:13:58.958128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:44:56.940 [2024-04-18 09:13:58.958188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.940 [2024-04-18 09:13:58.958241] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:56.940 [2024-04-18 09:13:58.958331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:56.940 [2024-04-18 09:13:58.958415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.940 [2024-04-18 09:13:58.958471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.940 [2024-04-18 09:13:58.958611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:56.940 [2024-04-18 09:13:58.958746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:56.940 [2024-04-18 09:13:58.958855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:56.940 [2024-04-18 09:13:58.958919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:56.940 [2024-04-18 09:13:58.958980] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:56.940 [2024-04-18 09:13:58.959095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:56.940 [2024-04-18 09:13:58.959223] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:56.940 [2024-04-18 09:13:58.959413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.940 [2024-04-18 09:13:58.959535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:56.940 [2024-04-18 09:13:58.959681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:44:56.940 [2024-04-18 09:13:58.959826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:44:56.940 [2024-04-18 09:13:58.960006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:44:56.940 [2024-04-18 09:13:58.960125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:44:56.941 [2024-04-18 09:13:58.960326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:44:56.941 [2024-04-18 09:13:58.960418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:44:56.941 [2024-04-18 09:13:58.960535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:44:56.941 [2024-04-18 09:13:58.960607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:44:56.941 [2024-04-18 09:13:58.960734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:44:56.941 [2024-04-18 09:13:58.960833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:44:56.941 [2024-04-18 09:13:58.960946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:44:56.941 [2024-04-18 09:13:58.961074] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:44:56.941 [2024-04-18 09:13:58.961251] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:56.941 [2024-04-18 09:13:58.961456] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.941 [2024-04-18 09:13:58.961652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:56.941 [2024-04-18 09:13:58.961826] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:56.941 [2024-04-18 09:13:58.961941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:56.941 [2024-04-18 09:13:58.962013] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:56.941 [2024-04-18 09:13:58.962090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.941 [2024-04-18 09:13:58.962139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:56.941 [2024-04-18 09:13:58.962185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.540 ms 00:44:56.941 [2024-04-18 09:13:58.962226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.941 [2024-04-18 09:13:58.991613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.941 [2024-04-18 09:13:58.991876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:56.941 [2024-04-18 09:13:58.991991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.184 ms 00:44:56.941 [2024-04-18 09:13:58.992095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.941 [2024-04-18 09:13:58.992322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.941 [2024-04-18 09:13:58.992391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:56.941 [2024-04-18 09:13:58.992508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:56.941 [2024-04-18 09:13:58.992558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.057800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.058073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:57.198 [2024-04-18 09:13:59.058201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.156 ms 00:44:57.198 [2024-04-18 09:13:59.058298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.058468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.058526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:57.198 [2024-04-18 09:13:59.058641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:57.198 [2024-04-18 09:13:59.058686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.059244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.059363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:57.198 [2024-04-18 09:13:59.059485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:44:57.198 [2024-04-18 09:13:59.059532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.059749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.059856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:57.198 [2024-04-18 09:13:59.059975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:44:57.198 [2024-04-18 09:13:59.060090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.089111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 
09:13:59.089396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:57.198 [2024-04-18 09:13:59.089531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.891 ms 00:44:57.198 [2024-04-18 09:13:59.089628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.115097] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:57.198 [2024-04-18 09:13:59.115443] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:57.198 [2024-04-18 09:13:59.115581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.115622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:57.198 [2024-04-18 09:13:59.115724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.726 ms 00:44:57.198 [2024-04-18 09:13:59.115799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.153911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.154224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:57.198 [2024-04-18 09:13:59.154331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.866 ms 00:44:57.198 [2024-04-18 09:13:59.154388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.179553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.179820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:57.198 [2024-04-18 09:13:59.179941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.919 ms 00:44:57.198 [2024-04-18 09:13:59.180007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.198 [2024-04-18 09:13:59.205143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.198 [2024-04-18 09:13:59.205405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:57.199 [2024-04-18 09:13:59.205515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.899 ms 00:44:57.199 [2024-04-18 09:13:59.205602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.199 [2024-04-18 09:13:59.206318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.199 [2024-04-18 09:13:59.206480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:57.199 [2024-04-18 09:13:59.206584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:44:57.199 [2024-04-18 09:13:59.206670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.323268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.323552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:57.456 [2024-04-18 09:13:59.323662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.509 ms 00:44:57.456 [2024-04-18 09:13:59.323754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.341687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:57.456 [2024-04-18 09:13:59.361390] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.361693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:57.456 [2024-04-18 09:13:59.361807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.445 ms 00:44:57.456 [2024-04-18 09:13:59.361924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.362086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.362142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:57.456 [2024-04-18 09:13:59.362247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:57.456 [2024-04-18 09:13:59.362353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.362473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.362532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:57.456 [2024-04-18 09:13:59.362633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:44:57.456 [2024-04-18 09:13:59.362755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.365421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.365587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:44:57.456 [2024-04-18 09:13:59.365710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:44:57.456 [2024-04-18 09:13:59.365771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.365870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.365992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:57.456 [2024-04-18 09:13:59.366050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:57.456 [2024-04-18 09:13:59.366127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.366303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:57.456 [2024-04-18 09:13:59.366450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.366564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:57.456 [2024-04-18 09:13:59.366685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:44:57.456 [2024-04-18 09:13:59.366743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.415420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.415799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:57.456 [2024-04-18 09:13:59.415973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.569 ms 00:44:57.456 [2024-04-18 09:13:59.416034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.456 [2024-04-18 09:13:59.416304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.456 [2024-04-18 09:13:59.416458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:57.456 [2024-04-18 09:13:59.416564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:44:57.457 [2024-04-18 09:13:59.416612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.457 [2024-04-18 09:13:59.417775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:57.457 [2024-04-18 09:13:59.425219] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 517.616 ms, result 0 00:44:57.457 [2024-04-18 09:13:59.426404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:57.457 Some configs were skipped because the RPC state that can call them passed over. 00:44:57.457 09:13:59 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:44:57.714 [2024-04-18 09:13:59.730602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.714 [2024-04-18 09:13:59.730900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:44:57.714 [2024-04-18 09:13:59.731038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.991 ms 00:44:57.714 [2024-04-18 09:13:59.731113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.715 [2024-04-18 09:13:59.731261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 49.650 ms, result 0 00:44:57.715 true 00:44:57.715 09:13:59 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:44:57.972 [2024-04-18 09:13:59.996747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.972 [2024-04-18 09:13:59.997045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:44:57.972 [2024-04-18 09:13:59.997181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.766 ms 00:44:57.972 [2024-04-18 09:13:59.997231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.972 [2024-04-18 09:13:59.997423] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 50.429 ms, result 0 00:44:57.972 true 00:44:57.972 09:14:00 -- ftl/trim.sh@102 -- # killprocess 79428 00:44:57.972 09:14:00 -- common/autotest_common.sh@936 -- # '[' -z 79428 ']' 00:44:57.972 09:14:00 -- common/autotest_common.sh@940 -- # kill -0 79428 00:44:57.972 09:14:00 -- common/autotest_common.sh@941 -- # uname 00:44:57.972 09:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:44:57.972 09:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79428 00:44:57.972 killing process with pid 79428 00:44:57.972 09:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:44:57.972 09:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:44:57.972 09:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79428' 00:44:57.972 09:14:00 -- common/autotest_common.sh@955 -- # kill 79428 00:44:57.972 09:14:00 -- common/autotest_common.sh@960 -- # wait 79428 00:44:59.344 [2024-04-18 09:14:01.399975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.400284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:59.344 [2024-04-18 09:14:01.400410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:59.344 [2024-04-18 
09:14:01.400463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.344 [2024-04-18 09:14:01.400589] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:59.344 [2024-04-18 09:14:01.404908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.405136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:59.344 [2024-04-18 09:14:01.405246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:44:59.344 [2024-04-18 09:14:01.405295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.344 [2024-04-18 09:14:01.405679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.405744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:59.344 [2024-04-18 09:14:01.405962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:44:59.344 [2024-04-18 09:14:01.406022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.344 [2024-04-18 09:14:01.409833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.410018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:59.344 [2024-04-18 09:14:01.410130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:44:59.344 [2024-04-18 09:14:01.410176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.344 [2024-04-18 09:14:01.417257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.417466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:44:59.344 [2024-04-18 09:14:01.417600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.995 ms 00:44:59.344 [2024-04-18 09:14:01.417647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.344 [2024-04-18 09:14:01.436881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.344 [2024-04-18 09:14:01.437152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:59.344 [2024-04-18 09:14:01.437255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.017 ms 00:44:59.344 [2024-04-18 09:14:01.437298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.602 [2024-04-18 09:14:01.449864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.602 [2024-04-18 09:14:01.450121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:59.602 [2024-04-18 09:14:01.450244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.382 ms 00:44:59.602 [2024-04-18 09:14:01.450290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.602 [2024-04-18 09:14:01.450576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.602 [2024-04-18 09:14:01.450714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:59.602 [2024-04-18 09:14:01.450820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:44:59.602 [2024-04-18 09:14:01.450919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.602 [2024-04-18 09:14:01.470219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.602 [2024-04-18 09:14:01.470505] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:59.602 [2024-04-18 09:14:01.470624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.214 ms 00:44:59.602 [2024-04-18 09:14:01.470667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.602 [2024-04-18 09:14:01.489710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.602 [2024-04-18 09:14:01.489995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:59.602 [2024-04-18 09:14:01.490114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.855 ms 00:44:59.602 [2024-04-18 09:14:01.490159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.602 [2024-04-18 09:14:01.508793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.602 [2024-04-18 09:14:01.509050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:59.603 [2024-04-18 09:14:01.509155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.501 ms 00:44:59.603 [2024-04-18 09:14:01.509197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.603 [2024-04-18 09:14:01.528237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.603 [2024-04-18 09:14:01.528556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:59.603 [2024-04-18 09:14:01.528701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.775 ms 00:44:59.603 [2024-04-18 09:14:01.528746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.603 [2024-04-18 09:14:01.528876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:59.603 [2024-04-18 09:14:01.528943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.529998] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.530997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.531977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532538] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.532939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.533907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 
09:14:01.534776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.534968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.535922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:59.603 [2024-04-18 09:14:01.536857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.536961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:44:59.604 [2024-04-18 09:14:01.537267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.537968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.538070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.538141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.538206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.538269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:59.604 [2024-04-18 09:14:01.538342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:59.604 [2024-04-18 09:14:01.538542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:44:59.604 [2024-04-18 09:14:01.538615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:59.604 [2024-04-18 09:14:01.538656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:59.604 [2024-04-18 09:14:01.538691] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:59.604 [2024-04-18 09:14:01.538785] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:59.604 [2024-04-18 09:14:01.538831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:59.604 [2024-04-18 09:14:01.538877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:59.604 [2024-04-18 09:14:01.538913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:59.604 [2024-04-18 09:14:01.538950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:59.604 [2024-04-18 09:14:01.539032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:59.604 [2024-04-18 09:14:01.539074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.604 [2024-04-18 09:14:01.539110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:59.604 [2024-04-18 09:14:01.539152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.201 ms 00:44:59.604 [2024-04-18 09:14:01.539230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.563743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.604 [2024-04-18 09:14:01.563998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:59.604 [2024-04-18 09:14:01.564113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.373 ms 00:44:59.604 [2024-04-18 09:14:01.564162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.564597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.604 [2024-04-18 09:14:01.564726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:59.604 [2024-04-18 09:14:01.564830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:44:59.604 [2024-04-18 09:14:01.564874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.649047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.604 [2024-04-18 09:14:01.649296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:59.604 [2024-04-18 09:14:01.649424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.604 [2024-04-18 09:14:01.649517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.649698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.604 [2024-04-18 09:14:01.649765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:59.604 [2024-04-18 09:14:01.649870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.604 [2024-04-18 09:14:01.649958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.650076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.604 [2024-04-18 09:14:01.650127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:59.604 [2024-04-18 09:14:01.650219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.604 [2024-04-18 09:14:01.650306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.604 [2024-04-18 09:14:01.650369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.604 [2024-04-18 09:14:01.650422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:59.604 [2024-04-18 09:14:01.650511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.604 [2024-04-18 09:14:01.650558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.862 [2024-04-18 09:14:01.801657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.862 [2024-04-18 09:14:01.801905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:59.862 [2024-04-18 09:14:01.802011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.862 [2024-04-18 09:14:01.802113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.862 [2024-04-18 09:14:01.857805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.862 [2024-04-18 09:14:01.858024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:59.862 [2024-04-18 09:14:01.858140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.862 
[2024-04-18 09:14:01.858225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.862 [2024-04-18 09:14:01.858366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.862 [2024-04-18 09:14:01.858438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:59.862 [2024-04-18 09:14:01.858557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.862 [2024-04-18 09:14:01.858601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.862 [2024-04-18 09:14:01.858770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.862 [2024-04-18 09:14:01.858901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:59.862 [2024-04-18 09:14:01.859004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.862 [2024-04-18 09:14:01.859051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.862 [2024-04-18 09:14:01.859273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.862 [2024-04-18 09:14:01.859335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:59.862 [2024-04-18 09:14:01.859485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.862 [2024-04-18 09:14:01.859543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.863 [2024-04-18 09:14:01.859658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.863 [2024-04-18 09:14:01.859766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:59.863 [2024-04-18 09:14:01.859822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.863 [2024-04-18 09:14:01.859900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.863 [2024-04-18 09:14:01.860052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.863 [2024-04-18 09:14:01.860150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:59.863 [2024-04-18 09:14:01.860250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.863 [2024-04-18 09:14:01.860295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.863 [2024-04-18 09:14:01.860459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:59.863 [2024-04-18 09:14:01.860559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:59.863 [2024-04-18 09:14:01.860651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:59.863 [2024-04-18 09:14:01.860762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.863 [2024-04-18 09:14:01.860990] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.979 ms, result 0 00:45:01.761 09:14:03 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:01.761 [2024-04-18 09:14:03.521353] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
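[Editor's note] The xtrace fragments interleaved with the FTL logs above (ftl/trim.sh@99, @100, @102, @105, plus the killprocess trace at common/autotest_common.sh@936-@960) show the harness logic that drives this part of the test. The following bash sketch reconstructs that sequence using only commands actually traced in this log; the killprocess body is inferred and abbreviated (the sudo branch at @946 evaluates false in this run, so its contents are unknown and omitted), and every path, LBA, and the PID 79428 are copied from this log rather than from the real scripts.

# Trim the first and the last 1024 blocks of the 23,592,960-entry L2P space
# (23591936 = 23592960 - 1024, per the "L2P entries" line in the layout dump).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # trim.sh@99
"$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # trim.sh@100

killprocess() {                              # inferred shape; not the authoritative helper
    local pid=$1
    [ -z "$pid" ] && return 1                # @936: a PID argument is required
    kill -0 "$pid" || return 1               # @940: the process must still be alive
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")  # @941-@942
    # @946: the '[ "$process_name" = sudo ]' branch is false in this run and is omitted
    echo "killing process with pid $pid"     # @954
    kill "$pid"                              # @955: SIGTERM; triggers the 'FTL shutdown' trace
    wait "$pid"                              # @960: reap after the app exits cleanly
}
killprocess 79428                            # trim.sh@102: stop the SPDK app (reactor_0)

# trim.sh@105: read the device back through spdk_dd for verification; the flags
# are exactly those of the traced command line.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Each unmap RPC corresponds to one 'Process unmap' action and one 'FTL unmap' management process above (roughly 49-50 ms each), and the kill corresponds to the 'FTL shutdown' process that finished with result 0 after 460.979 ms.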
00:45:01.762 [2024-04-18 09:14:03.521730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79508 ] 00:45:01.762 [2024-04-18 09:14:03.685868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.020 [2024-04-18 09:14:03.971504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:02.586 [2024-04-18 09:14:04.446420] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:02.586 [2024-04-18 09:14:04.446721] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:02.586 [2024-04-18 09:14:04.609823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.586 [2024-04-18 09:14:04.610101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:02.586 [2024-04-18 09:14:04.610209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:02.586 [2024-04-18 09:14:04.610260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.586 [2024-04-18 09:14:04.614118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.586 [2024-04-18 09:14:04.614297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:02.586 [2024-04-18 09:14:04.614423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:45:02.586 [2024-04-18 09:14:04.614469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.586 [2024-04-18 09:14:04.614797] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:02.587 [2024-04-18 09:14:04.616307] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:02.587 [2024-04-18 09:14:04.616500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.616595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:02.587 [2024-04-18 09:14:04.616644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:45:02.587 [2024-04-18 09:14:04.616680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.618448] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:02.587 [2024-04-18 09:14:04.643840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.644120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:02.587 [2024-04-18 09:14:04.644225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.389 ms 00:45:02.587 [2024-04-18 09:14:04.644269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.644483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.644565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:02.587 [2024-04-18 09:14:04.644664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:45:02.587 [2024-04-18 09:14:04.644705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.652584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 
09:14:04.652777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:02.587 [2024-04-18 09:14:04.652942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.789 ms 00:45:02.587 [2024-04-18 09:14:04.652987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.653186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.653258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:02.587 [2024-04-18 09:14:04.653303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:45:02.587 [2024-04-18 09:14:04.653337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.653410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.653450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:02.587 [2024-04-18 09:14:04.653485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:02.587 [2024-04-18 09:14:04.653519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.653666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:02.587 [2024-04-18 09:14:04.660657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.660834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:02.587 [2024-04-18 09:14:04.660951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.001 ms 00:45:02.587 [2024-04-18 09:14:04.660993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.661119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.661172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:02.587 [2024-04-18 09:14:04.661211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:02.587 [2024-04-18 09:14:04.661293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.661355] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:02.587 [2024-04-18 09:14:04.661431] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:45:02.587 [2024-04-18 09:14:04.661607] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:02.587 [2024-04-18 09:14:04.661679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:45:02.587 [2024-04-18 09:14:04.661808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:45:02.587 [2024-04-18 09:14:04.661972] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:02.587 [2024-04-18 09:14:04.662032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:45:02.587 [2024-04-18 09:14:04.662090] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:02.587 [2024-04-18 09:14:04.662148] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:02.587 [2024-04-18 09:14:04.662276] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:02.587 [2024-04-18 09:14:04.662320] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:02.587 [2024-04-18 09:14:04.662354] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:45:02.587 [2024-04-18 09:14:04.662406] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:45:02.587 [2024-04-18 09:14:04.662443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.662531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:02.587 [2024-04-18 09:14:04.662578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:45:02.587 [2024-04-18 09:14:04.662617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.662720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.587 [2024-04-18 09:14:04.662806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:02.587 [2024-04-18 09:14:04.662844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:45:02.587 [2024-04-18 09:14:04.662878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.587 [2024-04-18 09:14:04.663029] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:02.587 [2024-04-18 09:14:04.663134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:02.587 [2024-04-18 09:14:04.663178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:02.587 [2024-04-18 09:14:04.663404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:02.587 [2024-04-18 09:14:04.663556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:02.587 [2024-04-18 09:14:04.663705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:02.587 [2024-04-18 09:14:04.663718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:02.587 [2024-04-18 09:14:04.663729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:02.587 [2024-04-18 09:14:04.663740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:02.587 [2024-04-18 09:14:04.663751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:45:02.587 [2024-04-18 09:14:04.663762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:02.587 [2024-04-18 09:14:04.663784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:45:02.587 [2024-04-18 09:14:04.663795] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:45:02.587 [2024-04-18 09:14:04.663817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:45:02.587 [2024-04-18 09:14:04.663827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:02.587 [2024-04-18 09:14:04.663849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:02.587 [2024-04-18 09:14:04.663882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:02.587 [2024-04-18 09:14:04.663925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:02.587 [2024-04-18 09:14:04.663957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:45:02.587 [2024-04-18 09:14:04.663985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:02.587 [2024-04-18 09:14:04.663996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:02.587 [2024-04-18 09:14:04.664008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:02.587 [2024-04-18 09:14:04.664019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:02.587 [2024-04-18 09:14:04.664030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:02.587 [2024-04-18 09:14:04.664041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:45:02.587 [2024-04-18 09:14:04.664052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:02.587 [2024-04-18 09:14:04.664063] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:02.587 [2024-04-18 09:14:04.664075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:02.587 [2024-04-18 09:14:04.664086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:02.587 [2024-04-18 09:14:04.664098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.587 [2024-04-18 09:14:04.664110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:02.587 [2024-04-18 09:14:04.664121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:02.587 [2024-04-18 09:14:04.664133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:02.587 [2024-04-18 09:14:04.664145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:02.587 [2024-04-18 09:14:04.664156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:02.588 [2024-04-18 09:14:04.664168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:02.588 [2024-04-18 09:14:04.664181] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:02.588 [2024-04-18 09:14:04.664203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:02.588 [2024-04-18 09:14:04.664217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:02.588 [2024-04-18 09:14:04.664234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:45:02.588 [2024-04-18 09:14:04.664246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:45:02.588 [2024-04-18 09:14:04.664258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:45:02.588 [2024-04-18 09:14:04.664271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:45:02.588 [2024-04-18 09:14:04.664283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:45:02.588 [2024-04-18 09:14:04.664295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:45:02.588 [2024-04-18 09:14:04.664308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:45:02.588 [2024-04-18 09:14:04.664320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:45:02.588 [2024-04-18 09:14:04.664333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:45:02.588 [2024-04-18 09:14:04.664345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:45:02.588 [2024-04-18 09:14:04.664358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:45:02.588 [2024-04-18 09:14:04.664370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:45:02.588 [2024-04-18 09:14:04.664383] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:02.588 [2024-04-18 09:14:04.664422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:02.588 [2024-04-18 09:14:04.664436] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:02.588 [2024-04-18 09:14:04.664448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:02.588 [2024-04-18 09:14:04.664461] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:02.588 [2024-04-18 09:14:04.664473] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:02.588 [2024-04-18 09:14:04.664487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.588 [2024-04-18 09:14:04.664500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:02.588 [2024-04-18 09:14:04.664517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:45:02.588 [2024-04-18 09:14:04.664528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.693581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.693809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:02.847 [2024-04-18 09:14:04.694028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.973 ms 00:45:02.847 [2024-04-18 09:14:04.694070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.694268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.694314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:02.847 [2024-04-18 09:14:04.694420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:45:02.847 [2024-04-18 09:14:04.694458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.766075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.766343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:02.847 [2024-04-18 09:14:04.766459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.558 ms 00:45:02.847 [2024-04-18 09:14:04.766505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.766648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.766690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:02.847 [2024-04-18 09:14:04.766726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:02.847 [2024-04-18 09:14:04.766819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.767334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.767411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:02.847 [2024-04-18 09:14:04.767517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:45:02.847 [2024-04-18 09:14:04.767560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.767789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.767842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:02.847 [2024-04-18 09:14:04.767952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:45:02.847 [2024-04-18 09:14:04.768046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.795274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.795518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:02.847 [2024-04-18 09:14:04.795703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.159 ms 00:45:02.847 
[2024-04-18 09:14:04.795746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.819797] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:02.847 [2024-04-18 09:14:04.820127] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:02.847 [2024-04-18 09:14:04.820306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.820346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:02.847 [2024-04-18 09:14:04.820410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.349 ms 00:45:02.847 [2024-04-18 09:14:04.820494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.856526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.856801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:02.847 [2024-04-18 09:14:04.857006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.867 ms 00:45:02.847 [2024-04-18 09:14:04.857065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.880889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.881185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:02.847 [2024-04-18 09:14:04.881303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:45:02.847 [2024-04-18 09:14:04.881346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.905604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.905878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:02.847 [2024-04-18 09:14:04.905966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:45:02.847 [2024-04-18 09:14:04.906007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.847 [2024-04-18 09:14:04.906738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.847 [2024-04-18 09:14:04.906872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:02.847 [2024-04-18 09:14:04.906961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:45:02.847 [2024-04-18 09:14:04.907002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.019696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.019966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:03.106 [2024-04-18 09:14:05.020068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.624 ms 00:45:03.106 [2024-04-18 09:14:05.020112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.037692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:03.106 [2024-04-18 09:14:05.056638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.056931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:03.106 [2024-04-18 09:14:05.057086] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.352 ms 00:45:03.106 [2024-04-18 09:14:05.057129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.057315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.057363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:03.106 [2024-04-18 09:14:05.057418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:03.106 [2024-04-18 09:14:05.057508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.057606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.057645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:03.106 [2024-04-18 09:14:05.057737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:45:03.106 [2024-04-18 09:14:05.057777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.060261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.060408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:45:03.106 [2024-04-18 09:14:05.060498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:45:03.106 [2024-04-18 09:14:05.060540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.060604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.060675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:03.106 [2024-04-18 09:14:05.060741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:03.106 [2024-04-18 09:14:05.060786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.060854] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:03.106 [2024-04-18 09:14:05.060894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.060929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:03.106 [2024-04-18 09:14:05.061048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:45:03.106 [2024-04-18 09:14:05.061082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.107971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.108234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:03.106 [2024-04-18 09:14:05.108462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.832 ms 00:45:03.106 [2024-04-18 09:14:05.108507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.106 [2024-04-18 09:14:05.108753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.106 [2024-04-18 09:14:05.108859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:03.106 [2024-04-18 09:14:05.108960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:45:03.106 [2024-04-18 09:14:05.109001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.107 [2024-04-18 09:14:05.110169] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:03.107 [2024-04-18 09:14:05.117219] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 499.995 ms, result 0 00:45:03.107 [2024-04-18 09:14:05.118259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:03.107 [2024-04-18 09:14:05.140456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:11.303  Copying: 31/256 [MB] (31 MBps) Copying: 65/256 [MB] (33 MBps) Copying: 97/256 [MB] (32 MBps) Copying: 129/256 [MB] (32 MBps) Copying: 159/256 [MB] (29 MBps) Copying: 191/256 [MB] (32 MBps) Copying: 225/256 [MB] (33 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-04-18 09:14:13.290200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:11.303 [2024-04-18 09:14:13.314568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.303 [2024-04-18 09:14:13.314812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:11.303 [2024-04-18 09:14:13.314946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:11.303 [2024-04-18 09:14:13.314992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.303 [2024-04-18 09:14:13.315121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:11.303 [2024-04-18 09:14:13.320055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.303 [2024-04-18 09:14:13.320245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:11.303 [2024-04-18 09:14:13.320363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:45:11.304 [2024-04-18 09:14:13.320488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.304 [2024-04-18 09:14:13.320828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.304 [2024-04-18 09:14:13.320950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:11.304 [2024-04-18 09:14:13.321044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:45:11.304 [2024-04-18 09:14:13.321151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.304 [2024-04-18 09:14:13.324779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.304 [2024-04-18 09:14:13.324931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:11.304 [2024-04-18 09:14:13.325020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:45:11.304 [2024-04-18 09:14:13.325097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.304 [2024-04-18 09:14:13.332103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.304 [2024-04-18 09:14:13.332258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:45:11.304 [2024-04-18 09:14:13.332352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:45:11.304 [2024-04-18 09:14:13.332413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.304 [2024-04-18 09:14:13.381258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.304 [2024-04-18 09:14:13.381541] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:11.304 [2024-04-18 09:14:13.381639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.715 ms 00:45:11.304 [2024-04-18 09:14:13.381684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.409494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.409745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:11.562 [2024-04-18 09:14:13.409846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.647 ms 00:45:11.562 [2024-04-18 09:14:13.409890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.410141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.410276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:11.562 [2024-04-18 09:14:13.410357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:45:11.562 [2024-04-18 09:14:13.410426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.459602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.459883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:45:11.562 [2024-04-18 09:14:13.460020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.056 ms 00:45:11.562 [2024-04-18 09:14:13.460118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.509046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.509342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:45:11.562 [2024-04-18 09:14:13.509478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.804 ms 00:45:11.562 [2024-04-18 09:14:13.509584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.557840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.558102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:11.562 [2024-04-18 09:14:13.558212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.134 ms 00:45:11.562 [2024-04-18 09:14:13.558305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.606439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.562 [2024-04-18 09:14:13.606700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:11.562 [2024-04-18 09:14:13.606812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.943 ms 00:45:11.562 [2024-04-18 09:14:13.606911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.562 [2024-04-18 09:14:13.607025] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:11.562 [2024-04-18 09:14:13.607127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 
[2024-04-18 09:14:13.607490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.607958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:11.562 [2024-04-18 09:14:13.608653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.608716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.608817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.608912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.609936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:45:11.563 [2024-04-18 09:14:13.610226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.610964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.611907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.612950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.613983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.614928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.615893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.616922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:11.563 [2024-04-18 09:14:13.617078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:11.563 [2024-04-18 09:14:13.617224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d4e12f0-6a8e-4451-b5ee-b80718c55513 00:45:11.563 [2024-04-18 09:14:13.617337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:45:11.563 [2024-04-18 09:14:13.617398] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:11.563 [2024-04-18 09:14:13.617478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:11.564 [2024-04-18 09:14:13.617561] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:11.564 [2024-04-18 09:14:13.617602] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:11.564 [2024-04-18 09:14:13.617664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:11.564 [2024-04-18 09:14:13.617705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:11.564 [2024-04-18 09:14:13.617739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:11.564 [2024-04-18 09:14:13.617773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:11.564 [2024-04-18 09:14:13.617811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.564 [2024-04-18 09:14:13.617853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:11.564 [2024-04-18 09:14:13.617893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.786 ms 00:45:11.564 [2024-04-18 09:14:13.617929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.564 [2024-04-18 09:14:13.642423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.564 [2024-04-18 09:14:13.642665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:11.564 [2024-04-18 09:14:13.642777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.390 ms 00:45:11.564 [2024-04-18 09:14:13.642820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.564 [2024-04-18 09:14:13.643233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.564 [2024-04-18 09:14:13.643344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:11.564 [2024-04-18 09:14:13.643495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:45:11.564 [2024-04-18 09:14:13.643601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.714700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.714957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:11.822 [2024-04-18 09:14:13.715085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.715181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.715324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.715365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:11.822 [2024-04-18 09:14:13.715463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.715549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.715654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.715709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:11.822 [2024-04-18 09:14:13.715799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 
09:14:13.715842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.715943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.715984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:11.822 [2024-04-18 09:14:13.716020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.716055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.856606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.856851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:11.822 [2024-04-18 09:14:13.856942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.857039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.913163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.913499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:11.822 [2024-04-18 09:14:13.913639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.913710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.913889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.914033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:11.822 [2024-04-18 09:14:13.914153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.914207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.914347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.914494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:11.822 [2024-04-18 09:14:13.914554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.914634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.914896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.915031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:11.822 [2024-04-18 09:14:13.915153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.915213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.915384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.915514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:11.822 [2024-04-18 09:14:13.915572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:11.822 [2024-04-18 09:14:13.915651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.822 [2024-04-18 09:14:13.915807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:11.822 [2024-04-18 09:14:13.915994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:11.822 [2024-04-18 09:14:13.916116] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:45:11.822 [2024-04-18 09:14:13.916232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:11.822 [2024-04-18 09:14:13.916358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:45:11.822 [2024-04-18 09:14:13.916465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:45:11.822 [2024-04-18 09:14:13.916598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:45:11.822 [2024-04-18 09:14:13.916655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:11.822 [2024-04-18 09:14:13.916942] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.352 ms, result 0
00:45:13.744
00:45:13.744
00:45:13.744 09:14:15 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:45:14.001 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:45:14.001 09:14:16 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:45:14.001 09:14:16 -- ftl/trim.sh@109 -- # fio_kill
00:45:14.001 09:14:16 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:45:14.001 09:14:16 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:45:14.001 09:14:16 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:45:14.001 09:14:16 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:45:14.001 09:14:16 -- ftl/trim.sh@20 -- # killprocess 79428
00:45:14.001 09:14:16 -- common/autotest_common.sh@936 -- # '[' -z 79428 ']'
00:45:14.001 09:14:16 -- common/autotest_common.sh@940 -- # kill -0 79428
00:45:14.001 Process with pid 79428 is not found /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79428) - No such process
00:45:14.001 09:14:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79428 is not found'
00:45:14.001 ************************************
00:45:14.001 END TEST ftl_trim
00:45:14.001 ************************************
00:45:14.001
00:45:14.001 real 1m11.002s
00:45:14.001 user 1m39.277s
00:45:14.001 sys 0m7.340s
00:45:14.259 09:14:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:45:14.001 09:14:16 -- common/autotest_common.sh@10 -- # set +x
00:45:14.259 09:14:16 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:45:14.259 09:14:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:45:14.259 09:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:45:14.259 09:14:16 -- common/autotest_common.sh@10 -- # set +x
00:45:14.259 ************************************
00:45:14.259 START TEST ftl_restore
00:45:14.259 ************************************
00:45:14.259 09:14:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:45:14.259 * Looking for test storage...
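The hand-off above is worth annotating: run_test (ftl.sh@77) wraps each test script in the timing, banner, and xtrace plumbing visible in ftl_trim's teardown, where the data file is md5-verified against the sum recorded before shutdown, the scratch artifacts are removed, and killprocess probes the target with kill -0 before trying to stop it (the target had already exited, hence the 'No such process' note). Reproducing the restore invocation by hand, outside run_test, would look roughly like the sketch below; the BDFs and path are the ones from this run, and direct invocation is this sketch's assumption, not something the harness does:

  # sketch: run the restore test directly (root assumed, BDFs from this run)
  spdk=/home/vagrant/spdk_repo/spdk
  sudo "$spdk/test/ftl/restore.sh" -c 0000:00:10.0 0000:00:11.0
  # -c names the NV-cache controller; the positional BDF is the base device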
00:45:14.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:45:14.259 09:14:16 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:45:14.259 09:14:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:45:14.259 09:14:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:45:14.259 09:14:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:45:14.259 09:14:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:45:14.259 09:14:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:45:14.259 09:14:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:45:14.259 09:14:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:45:14.259 09:14:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:45:14.259 09:14:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:45:14.259 09:14:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:45:14.259 09:14:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:45:14.259 09:14:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:45:14.259 09:14:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:45:14.259 09:14:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:45:14.259 09:14:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:45:14.259 09:14:16 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:45:14.259 09:14:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:45:14.259 09:14:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:45:14.259 09:14:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:45:14.259 09:14:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:45:14.259 09:14:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:45:14.259 09:14:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:45:14.259 09:14:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:45:14.259 09:14:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:45:14.259 09:14:16 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:45:14.259 09:14:16 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:45:14.259 09:14:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:45:14.259 09:14:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:45:14.259 09:14:16 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:45:14.259 09:14:16 -- ftl/restore.sh@13 -- # mktemp -d
00:45:14.259 09:14:16 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.PIZfEmLvbj
00:45:14.259 09:14:16 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:45:14.259 09:14:16 -- ftl/restore.sh@16 -- # case $opt in
00:45:14.259 09:14:16 -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:45:14.259 09:14:16 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:45:14.259 09:14:16 -- ftl/restore.sh@23 -- # shift 2
00:45:14.259 09:14:16 -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:45:14.259 09:14:16 -- ftl/restore.sh@25 -- # timeout=240
00:45:14.259 09:14:16 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
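Everything traced above is restore.sh's setup phase: it sources ftl/common.sh for the shared paths, cpumasks, and target binaries, makes a scratch mount point, and parses its arguments with getopts, where -c selects the NV-cache controller and, after shift 2, the remaining positional argument becomes the base device. Condensed into a standalone sketch with the same variable names (the getopts string also accepts u and f, which this run does not exercise, so only the -c case is shown; restore_kill is stubbed here but does the real cleanup in the script):

  #!/usr/bin/env bash
  # condensed from the restore.sh xtrace above
  restore_kill() { :; }                  # stub; the real function removes artifacts and kills the target
  mount_dir=$(mktemp -d)                 # /tmp/tmp.PIZfEmLvbj in this run
  while getopts :u:c:f opt; do
          case $opt in
                  c) nv_cache=$OPTARG ;; # BDF of the write-buffer (NV cache) controller
          esac
  done
  shift 2                                # drop '-c <bdf>'; the leftover argument is positional
  device=$1                              # BDF backing the base bdev
  timeout=240                            # later handed to rpc.py as '-t 240' for bdev_ftl_create
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT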
00:45:14.259 09:14:16 -- ftl/restore.sh@39 -- # svcpid=79704 00:45:14.259 09:14:16 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:14.259 09:14:16 -- ftl/restore.sh@41 -- # waitforlisten 79704 00:45:14.259 09:14:16 -- common/autotest_common.sh@817 -- # '[' -z 79704 ']' 00:45:14.259 09:14:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:14.259 09:14:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:45:14.259 09:14:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:14.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:14.259 09:14:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:45:14.259 09:14:16 -- common/autotest_common.sh@10 -- # set +x 00:45:14.517 [2024-04-18 09:14:16.449545] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:45:14.517 [2024-04-18 09:14:16.449869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79704 ] 00:45:14.774 [2024-04-18 09:14:16.621810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.031 [2024-04-18 09:14:16.918458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.962 09:14:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:45:15.962 09:14:18 -- common/autotest_common.sh@850 -- # return 0 00:45:15.962 09:14:18 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:45:15.963 09:14:18 -- ftl/common.sh@54 -- # local name=nvme0 00:45:15.963 09:14:18 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:45:15.963 09:14:18 -- ftl/common.sh@56 -- # local size=103424 00:45:15.963 09:14:18 -- ftl/common.sh@59 -- # local base_bdev 00:45:15.963 09:14:18 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:16.528 09:14:18 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:45:16.528 09:14:18 -- ftl/common.sh@62 -- # local base_size 00:45:16.528 09:14:18 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:45:16.528 09:14:18 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:45:16.528 09:14:18 -- common/autotest_common.sh@1365 -- # local bdev_info 00:45:16.528 09:14:18 -- common/autotest_common.sh@1366 -- # local bs 00:45:16.528 09:14:18 -- common/autotest_common.sh@1367 -- # local nb 00:45:16.528 09:14:18 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:45:16.528 09:14:18 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:45:16.528 { 00:45:16.528 "name": "nvme0n1", 00:45:16.528 "aliases": [ 00:45:16.528 "57511bf6-f49d-4692-8087-d7043507cfda" 00:45:16.528 ], 00:45:16.528 "product_name": "NVMe disk", 00:45:16.528 "block_size": 4096, 00:45:16.528 "num_blocks": 1310720, 00:45:16.528 "uuid": "57511bf6-f49d-4692-8087-d7043507cfda", 00:45:16.528 "assigned_rate_limits": { 00:45:16.528 "rw_ios_per_sec": 0, 00:45:16.528 "rw_mbytes_per_sec": 0, 00:45:16.528 "r_mbytes_per_sec": 0, 00:45:16.528 "w_mbytes_per_sec": 0 00:45:16.528 }, 00:45:16.528 "claimed": true, 00:45:16.528 "claim_type": "read_many_write_one", 00:45:16.528 "zoned": false, 00:45:16.528 "supported_io_types": { 00:45:16.528 "read": true, 00:45:16.528 "write": 
true, 00:45:16.528 "unmap": true, 00:45:16.528 "write_zeroes": true, 00:45:16.528 "flush": true, 00:45:16.528 "reset": true, 00:45:16.528 "compare": true, 00:45:16.528 "compare_and_write": false, 00:45:16.528 "abort": true, 00:45:16.528 "nvme_admin": true, 00:45:16.528 "nvme_io": true 00:45:16.528 }, 00:45:16.528 "driver_specific": { 00:45:16.528 "nvme": [ 00:45:16.528 { 00:45:16.528 "pci_address": "0000:00:11.0", 00:45:16.528 "trid": { 00:45:16.528 "trtype": "PCIe", 00:45:16.528 "traddr": "0000:00:11.0" 00:45:16.528 }, 00:45:16.528 "ctrlr_data": { 00:45:16.528 "cntlid": 0, 00:45:16.528 "vendor_id": "0x1b36", 00:45:16.528 "model_number": "QEMU NVMe Ctrl", 00:45:16.528 "serial_number": "12341", 00:45:16.528 "firmware_revision": "8.0.0", 00:45:16.528 "subnqn": "nqn.2019-08.org.qemu:12341", 00:45:16.528 "oacs": { 00:45:16.528 "security": 0, 00:45:16.528 "format": 1, 00:45:16.528 "firmware": 0, 00:45:16.528 "ns_manage": 1 00:45:16.528 }, 00:45:16.528 "multi_ctrlr": false, 00:45:16.528 "ana_reporting": false 00:45:16.528 }, 00:45:16.528 "vs": { 00:45:16.528 "nvme_version": "1.4" 00:45:16.528 }, 00:45:16.528 "ns_data": { 00:45:16.528 "id": 1, 00:45:16.528 "can_share": false 00:45:16.529 } 00:45:16.529 } 00:45:16.529 ], 00:45:16.529 "mp_policy": "active_passive" 00:45:16.529 } 00:45:16.529 } 00:45:16.529 ]' 00:45:16.529 09:14:18 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:45:16.529 09:14:18 -- common/autotest_common.sh@1369 -- # bs=4096 00:45:16.529 09:14:18 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:45:16.787 09:14:18 -- common/autotest_common.sh@1370 -- # nb=1310720 00:45:16.787 09:14:18 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:45:16.787 09:14:18 -- common/autotest_common.sh@1374 -- # echo 5120 00:45:16.787 09:14:18 -- ftl/common.sh@63 -- # base_size=5120 00:45:16.787 09:14:18 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:45:16.787 09:14:18 -- ftl/common.sh@67 -- # clear_lvols 00:45:16.787 09:14:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:16.787 09:14:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:17.046 09:14:18 -- ftl/common.sh@28 -- # stores=4ab457df-a210-4b27-81eb-0cb3fc6bd50c 00:45:17.046 09:14:18 -- ftl/common.sh@29 -- # for lvs in $stores 00:45:17.046 09:14:18 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ab457df-a210-4b27-81eb-0cb3fc6bd50c 00:45:17.046 09:14:19 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:45:17.304 09:14:19 -- ftl/common.sh@68 -- # lvs=552a5f49-696a-4198-bf2a-812072e257d0 00:45:17.304 09:14:19 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 552a5f49-696a-4198-bf2a-812072e257d0 00:45:17.562 09:14:19 -- ftl/restore.sh@43 -- # split_bdev=aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.562 09:14:19 -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:45:17.562 09:14:19 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.562 09:14:19 -- ftl/common.sh@35 -- # local name=nvc0 00:45:17.562 09:14:19 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:45:17.562 09:14:19 -- ftl/common.sh@37 -- # local base_bdev=aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.562 09:14:19 -- ftl/common.sh@38 -- # local cache_size= 00:45:17.562 09:14:19 -- ftl/common.sh@41 -- # get_bdev_size aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.562 
09:14:19 -- common/autotest_common.sh@1364 -- # local bdev_name=aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.562 09:14:19 -- common/autotest_common.sh@1365 -- # local bdev_info 00:45:17.562 09:14:19 -- common/autotest_common.sh@1366 -- # local bs 00:45:17.562 09:14:19 -- common/autotest_common.sh@1367 -- # local nb 00:45:17.562 09:14:19 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:17.820 09:14:19 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:45:17.820 { 00:45:17.820 "name": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:17.820 "aliases": [ 00:45:17.820 "lvs/nvme0n1p0" 00:45:17.820 ], 00:45:17.820 "product_name": "Logical Volume", 00:45:17.820 "block_size": 4096, 00:45:17.820 "num_blocks": 26476544, 00:45:17.820 "uuid": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:17.820 "assigned_rate_limits": { 00:45:17.820 "rw_ios_per_sec": 0, 00:45:17.820 "rw_mbytes_per_sec": 0, 00:45:17.820 "r_mbytes_per_sec": 0, 00:45:17.820 "w_mbytes_per_sec": 0 00:45:17.820 }, 00:45:17.820 "claimed": false, 00:45:17.820 "zoned": false, 00:45:17.820 "supported_io_types": { 00:45:17.820 "read": true, 00:45:17.820 "write": true, 00:45:17.820 "unmap": true, 00:45:17.820 "write_zeroes": true, 00:45:17.820 "flush": false, 00:45:17.820 "reset": true, 00:45:17.820 "compare": false, 00:45:17.820 "compare_and_write": false, 00:45:17.820 "abort": false, 00:45:17.820 "nvme_admin": false, 00:45:17.820 "nvme_io": false 00:45:17.820 }, 00:45:17.820 "driver_specific": { 00:45:17.820 "lvol": { 00:45:17.820 "lvol_store_uuid": "552a5f49-696a-4198-bf2a-812072e257d0", 00:45:17.820 "base_bdev": "nvme0n1", 00:45:17.820 "thin_provision": true, 00:45:17.820 "snapshot": false, 00:45:17.820 "clone": false, 00:45:17.820 "esnap_clone": false 00:45:17.820 } 00:45:17.820 } 00:45:17.820 } 00:45:17.820 ]' 00:45:17.820 09:14:19 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:45:18.078 09:14:19 -- common/autotest_common.sh@1369 -- # bs=4096 00:45:18.078 09:14:19 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:45:18.078 09:14:19 -- common/autotest_common.sh@1370 -- # nb=26476544 00:45:18.078 09:14:19 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:45:18.078 09:14:19 -- common/autotest_common.sh@1374 -- # echo 103424 00:45:18.078 09:14:19 -- ftl/common.sh@41 -- # local base_size=5171 00:45:18.078 09:14:19 -- ftl/common.sh@44 -- # local nvc_bdev 00:45:18.078 09:14:19 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:45:18.337 09:14:20 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:45:18.337 09:14:20 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:45:18.337 09:14:20 -- ftl/common.sh@48 -- # get_bdev_size aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:18.337 09:14:20 -- common/autotest_common.sh@1364 -- # local bdev_name=aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:18.337 09:14:20 -- common/autotest_common.sh@1365 -- # local bdev_info 00:45:18.337 09:14:20 -- common/autotest_common.sh@1366 -- # local bs 00:45:18.337 09:14:20 -- common/autotest_common.sh@1367 -- # local nb 00:45:18.337 09:14:20 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:18.596 09:14:20 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:45:18.596 { 00:45:18.596 "name": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:18.596 "aliases": [ 00:45:18.596 "lvs/nvme0n1p0" 00:45:18.596 
], 00:45:18.596 "product_name": "Logical Volume", 00:45:18.596 "block_size": 4096, 00:45:18.596 "num_blocks": 26476544, 00:45:18.596 "uuid": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:18.596 "assigned_rate_limits": { 00:45:18.596 "rw_ios_per_sec": 0, 00:45:18.596 "rw_mbytes_per_sec": 0, 00:45:18.596 "r_mbytes_per_sec": 0, 00:45:18.596 "w_mbytes_per_sec": 0 00:45:18.596 }, 00:45:18.596 "claimed": false, 00:45:18.596 "zoned": false, 00:45:18.596 "supported_io_types": { 00:45:18.596 "read": true, 00:45:18.596 "write": true, 00:45:18.596 "unmap": true, 00:45:18.596 "write_zeroes": true, 00:45:18.596 "flush": false, 00:45:18.596 "reset": true, 00:45:18.596 "compare": false, 00:45:18.596 "compare_and_write": false, 00:45:18.596 "abort": false, 00:45:18.596 "nvme_admin": false, 00:45:18.596 "nvme_io": false 00:45:18.596 }, 00:45:18.596 "driver_specific": { 00:45:18.596 "lvol": { 00:45:18.596 "lvol_store_uuid": "552a5f49-696a-4198-bf2a-812072e257d0", 00:45:18.596 "base_bdev": "nvme0n1", 00:45:18.596 "thin_provision": true, 00:45:18.596 "snapshot": false, 00:45:18.596 "clone": false, 00:45:18.596 "esnap_clone": false 00:45:18.596 } 00:45:18.596 } 00:45:18.596 } 00:45:18.596 ]' 00:45:18.596 09:14:20 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:45:18.596 09:14:20 -- common/autotest_common.sh@1369 -- # bs=4096 00:45:18.596 09:14:20 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:45:18.596 09:14:20 -- common/autotest_common.sh@1370 -- # nb=26476544 00:45:18.596 09:14:20 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:45:18.596 09:14:20 -- common/autotest_common.sh@1374 -- # echo 103424 00:45:18.596 09:14:20 -- ftl/common.sh@48 -- # cache_size=5171 00:45:18.596 09:14:20 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:45:18.855 09:14:20 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:45:18.855 09:14:20 -- ftl/restore.sh@48 -- # get_bdev_size aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:18.855 09:14:20 -- common/autotest_common.sh@1364 -- # local bdev_name=aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:18.855 09:14:20 -- common/autotest_common.sh@1365 -- # local bdev_info 00:45:18.855 09:14:20 -- common/autotest_common.sh@1366 -- # local bs 00:45:18.855 09:14:20 -- common/autotest_common.sh@1367 -- # local nb 00:45:18.855 09:14:20 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aef4b48f-9f92-46fe-94b3-e633440f9e83 00:45:19.113 09:14:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:45:19.113 { 00:45:19.113 "name": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:19.113 "aliases": [ 00:45:19.114 "lvs/nvme0n1p0" 00:45:19.114 ], 00:45:19.114 "product_name": "Logical Volume", 00:45:19.114 "block_size": 4096, 00:45:19.114 "num_blocks": 26476544, 00:45:19.114 "uuid": "aef4b48f-9f92-46fe-94b3-e633440f9e83", 00:45:19.114 "assigned_rate_limits": { 00:45:19.114 "rw_ios_per_sec": 0, 00:45:19.114 "rw_mbytes_per_sec": 0, 00:45:19.114 "r_mbytes_per_sec": 0, 00:45:19.114 "w_mbytes_per_sec": 0 00:45:19.114 }, 00:45:19.114 "claimed": false, 00:45:19.114 "zoned": false, 00:45:19.114 "supported_io_types": { 00:45:19.114 "read": true, 00:45:19.114 "write": true, 00:45:19.114 "unmap": true, 00:45:19.114 "write_zeroes": true, 00:45:19.114 "flush": false, 00:45:19.114 "reset": true, 00:45:19.114 "compare": false, 00:45:19.114 "compare_and_write": false, 00:45:19.114 "abort": false, 00:45:19.114 "nvme_admin": false, 00:45:19.114 "nvme_io": false 00:45:19.114 }, 00:45:19.114 
"driver_specific": { 00:45:19.114 "lvol": { 00:45:19.114 "lvol_store_uuid": "552a5f49-696a-4198-bf2a-812072e257d0", 00:45:19.114 "base_bdev": "nvme0n1", 00:45:19.114 "thin_provision": true, 00:45:19.114 "snapshot": false, 00:45:19.114 "clone": false, 00:45:19.114 "esnap_clone": false 00:45:19.114 } 00:45:19.114 } 00:45:19.114 } 00:45:19.114 ]' 00:45:19.114 09:14:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:45:19.114 09:14:21 -- common/autotest_common.sh@1369 -- # bs=4096 00:45:19.114 09:14:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:45:19.372 09:14:21 -- common/autotest_common.sh@1370 -- # nb=26476544 00:45:19.372 09:14:21 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:45:19.372 09:14:21 -- common/autotest_common.sh@1374 -- # echo 103424 00:45:19.372 09:14:21 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:45:19.372 09:14:21 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aef4b48f-9f92-46fe-94b3-e633440f9e83 --l2p_dram_limit 10' 00:45:19.372 09:14:21 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:45:19.372 09:14:21 -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:45:19.372 09:14:21 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:45:19.372 09:14:21 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:45:19.372 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:45:19.372 09:14:21 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aef4b48f-9f92-46fe-94b3-e633440f9e83 --l2p_dram_limit 10 -c nvc0n1p0 00:45:19.632 [2024-04-18 09:14:21.514130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.514459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:19.632 [2024-04-18 09:14:21.514578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:19.632 [2024-04-18 09:14:21.514623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.514742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.514911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:19.632 [2024-04-18 09:14:21.514983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:19.632 [2024-04-18 09:14:21.515018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.515073] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:19.632 [2024-04-18 09:14:21.516492] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:19.632 [2024-04-18 09:14:21.516676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.516764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:19.632 [2024-04-18 09:14:21.516812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:45:19.632 [2024-04-18 09:14:21.516905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.517172] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5a06ffee-18e7-4484-82dd-d6d34a9070af 00:45:19.632 [2024-04-18 09:14:21.518824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 
09:14:21.518972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:45:19.632 [2024-04-18 09:14:21.519060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:45:19.632 [2024-04-18 09:14:21.519103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.527021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.527275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:19.632 [2024-04-18 09:14:21.527394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.808 ms 00:45:19.632 [2024-04-18 09:14:21.527442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.527597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.527689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:19.632 [2024-04-18 09:14:21.527746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:45:19.632 [2024-04-18 09:14:21.527783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.527883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.527940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:19.632 [2024-04-18 09:14:21.528047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:19.632 [2024-04-18 09:14:21.528097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.528156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:19.632 [2024-04-18 09:14:21.535282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.535505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:19.632 [2024-04-18 09:14:21.535653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.129 ms 00:45:19.632 [2024-04-18 09:14:21.535696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.535782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.535821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:19.632 [2024-04-18 09:14:21.535915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:45:19.632 [2024-04-18 09:14:21.535989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.536099] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:45:19.632 [2024-04-18 09:14:21.536258] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:45:19.632 [2024-04-18 09:14:21.536326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:19.632 [2024-04-18 09:14:21.536453] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:45:19.632 [2024-04-18 09:14:21.536576] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:19.632 [2024-04-18 09:14:21.536677] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:19.632 [2024-04-18 09:14:21.536745] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:19.632 [2024-04-18 09:14:21.536833] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:19.632 [2024-04-18 09:14:21.536890] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:45:19.632 [2024-04-18 09:14:21.536926] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:45:19.632 [2024-04-18 09:14:21.536982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.537019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:19.632 [2024-04-18 09:14:21.537057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:45:19.632 [2024-04-18 09:14:21.537103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.537310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.632 [2024-04-18 09:14:21.537350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:19.632 [2024-04-18 09:14:21.537393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:45:19.632 [2024-04-18 09:14:21.537446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.632 [2024-04-18 09:14:21.537560] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:19.632 [2024-04-18 09:14:21.537601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:19.632 [2024-04-18 09:14:21.537641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:19.632 [2024-04-18 09:14:21.537677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:19.632 [2024-04-18 09:14:21.537772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:19.632 [2024-04-18 09:14:21.537815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:19.632 [2024-04-18 09:14:21.537988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:19.632 [2024-04-18 09:14:21.538030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:19.632 [2024-04-18 09:14:21.538069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:19.632 [2024-04-18 09:14:21.538104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:19.632 [2024-04-18 09:14:21.538193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:19.632 [2024-04-18 09:14:21.538234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:19.632 [2024-04-18 09:14:21.538273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:19.632 [2024-04-18 09:14:21.538309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:19.632 [2024-04-18 09:14:21.538347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:45:19.632 [2024-04-18 09:14:21.538473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:19.632 [2024-04-18 09:14:21.538514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:19.632 [2024-04-18 09:14:21.538549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:45:19.632 [2024-04-18 09:14:21.538627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:45:19.632 [2024-04-18 09:14:21.538667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:45:19.632 [2024-04-18 09:14:21.538754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:45:19.632 [2024-04-18 09:14:21.538793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:45:19.632 [2024-04-18 09:14:21.538868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:19.632 [2024-04-18 09:14:21.538907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:19.632 [2024-04-18 09:14:21.538988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:19.632 [2024-04-18 09:14:21.539027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:19.632 [2024-04-18 09:14:21.539065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:45:19.632 [2024-04-18 09:14:21.539135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:19.632 [2024-04-18 09:14:21.539178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:19.632 [2024-04-18 09:14:21.539255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:19.632 [2024-04-18 09:14:21.539298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:19.632 [2024-04-18 09:14:21.539365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:19.632 [2024-04-18 09:14:21.539424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:45:19.632 [2024-04-18 09:14:21.539459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:19.632 [2024-04-18 09:14:21.539521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:19.633 [2024-04-18 09:14:21.539602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:19.633 [2024-04-18 09:14:21.539657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:19.633 [2024-04-18 09:14:21.539692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:19.633 [2024-04-18 09:14:21.539728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:45:19.633 [2024-04-18 09:14:21.539762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:19.633 [2024-04-18 09:14:21.539799] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:19.633 [2024-04-18 09:14:21.539834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:19.633 [2024-04-18 09:14:21.539955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:19.633 [2024-04-18 09:14:21.540023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:19.633 [2024-04-18 09:14:21.540064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:19.633 [2024-04-18 09:14:21.540100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:19.633 [2024-04-18 09:14:21.540138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:19.633 [2024-04-18 09:14:21.540240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:19.633 [2024-04-18 09:14:21.540278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:19.633 [2024-04-18 09:14:21.540314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:19.633 [2024-04-18 09:14:21.540410] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:19.633 [2024-04-18 09:14:21.540528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:19.633 [2024-04-18 09:14:21.540632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:19.633 [2024-04-18 09:14:21.540695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:45:19.633 [2024-04-18 09:14:21.540800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:45:19.633 [2024-04-18 09:14:21.540863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:45:19.633 [2024-04-18 09:14:21.540923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:45:19.633 [2024-04-18 09:14:21.541018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:45:19.633 [2024-04-18 09:14:21.541091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:45:19.633 [2024-04-18 09:14:21.541146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:45:19.633 [2024-04-18 09:14:21.541253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:45:19.633 [2024-04-18 09:14:21.541311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:45:19.633 [2024-04-18 09:14:21.541380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:45:19.633 [2024-04-18 09:14:21.541493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:45:19.633 [2024-04-18 09:14:21.541559] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:45:19.633 [2024-04-18 09:14:21.541614] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:19.633 [2024-04-18 09:14:21.541719] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:19.633 [2024-04-18 09:14:21.541781] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:19.633 [2024-04-18 09:14:21.541868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:19.633 [2024-04-18 09:14:21.541987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:19.633 [2024-04-18 09:14:21.542047] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:45:19.633 [2024-04-18 09:14:21.542106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.542188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:19.633 [2024-04-18 09:14:21.542291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:45:19.633 [2024-04-18 09:14:21.542337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.571525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.571811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:19.633 [2024-04-18 09:14:21.571945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.995 ms 00:45:19.633 [2024-04-18 09:14:21.572014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.572163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.572238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:19.633 [2024-04-18 09:14:21.572275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:45:19.633 [2024-04-18 09:14:21.572314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.633462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.633720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:19.633 [2024-04-18 09:14:21.633882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.029 ms 00:45:19.633 [2024-04-18 09:14:21.633930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.634011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.634149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:19.633 [2024-04-18 09:14:21.634200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:19.633 [2024-04-18 09:14:21.634241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.634805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.634942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:19.633 [2024-04-18 09:14:21.635029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:45:19.633 [2024-04-18 09:14:21.635073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.635233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.635278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:19.633 [2024-04-18 09:14:21.635357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:45:19.633 [2024-04-18 09:14:21.635421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.662666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.662922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:19.633 [2024-04-18 09:14:21.663085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.134 ms 00:45:19.633 [2024-04-18 
09:14:21.663137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.633 [2024-04-18 09:14:21.680632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:19.633 [2024-04-18 09:14:21.684338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.633 [2024-04-18 09:14:21.684542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:19.633 [2024-04-18 09:14:21.684638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.031 ms 00:45:19.633 [2024-04-18 09:14:21.684679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.891 [2024-04-18 09:14:21.760973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:19.891 [2024-04-18 09:14:21.761263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:19.891 [2024-04-18 09:14:21.761408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.203 ms 00:45:19.891 [2024-04-18 09:14:21.761456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:19.891 [2024-04-18 09:14:21.761630] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:45:19.891 [2024-04-18 09:14:21.761783] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:45:22.413 [2024-04-18 09:14:24.065639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.065980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:45:22.413 [2024-04-18 09:14:24.066106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2303.982 ms 00:45:22.413 [2024-04-18 09:14:24.066151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.066438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.066576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:22.413 [2024-04-18 09:14:24.066673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:45:22.413 [2024-04-18 09:14:24.066764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.113274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.113578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:22.413 [2024-04-18 09:14:24.113737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.370 ms 00:45:22.413 [2024-04-18 09:14:24.113782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.161404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.161626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:22.413 [2024-04-18 09:14:24.161765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.399 ms 00:45:22.413 [2024-04-18 09:14:24.161812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.162543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.162680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:22.413 [2024-04-18 09:14:24.162781] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:45:22.413 [2024-04-18 09:14:24.162828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.273096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.273412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:22.413 [2024-04-18 09:14:24.273573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.132 ms 00:45:22.413 [2024-04-18 09:14:24.273617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.318229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.318552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:22.413 [2024-04-18 09:14:24.318657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.497 ms 00:45:22.413 [2024-04-18 09:14:24.318701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.321300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.321465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:45:22.413 [2024-04-18 09:14:24.321631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:45:22.413 [2024-04-18 09:14:24.321676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.368682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.369028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:22.413 [2024-04-18 09:14:24.369197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.864 ms 00:45:22.413 [2024-04-18 09:14:24.369270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.369534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.369743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:22.413 [2024-04-18 09:14:24.369892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:22.413 [2024-04-18 09:14:24.370025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.370274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.413 [2024-04-18 09:14:24.370360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:22.413 [2024-04-18 09:14:24.370537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:45:22.413 [2024-04-18 09:14:24.370691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.413 [2024-04-18 09:14:24.372291] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2857.444 ms, result 0 00:45:22.413 { 00:45:22.413 "name": "ftl0", 00:45:22.413 "uuid": "5a06ffee-18e7-4484-82dd-d6d34a9070af" 00:45:22.413 } 00:45:22.413 09:14:24 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:45:22.413 09:14:24 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:45:22.670 09:14:24 -- ftl/restore.sh@63 -- # echo ']}' 00:45:22.670 09:14:24 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 
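The trace above covers the whole bring-up half of the restore test: the lvol geometry is read back with jq (block_size=4096, num_blocks=26476544, so the base bdev is 26476544 * 4096 / 1048576 = 103424 MiB, matching the "Base device capacity: 103424.00 MiB" layout dump), the FTL bdev is created over RPC, the resulting bdev subsystem config is saved as JSON, and the device is unloaded. A minimal bash sketch of that sequence, assuming a running SPDK target that already exposes the lvol and the nvc0n1p0 cache partition; the bdev name, UUID, flags, and paths are taken verbatim from the trace, while the brace-group redirection into ftl.json is an assumption about restore.sh's internals (only the individual echo and rpc.py calls appear in the log):

# Create the FTL bdev on the thin-provisioned lvol, using nvc0n1p0 as the
# NV cache and capping the resident L2P at 10 MiB of DRAM (the log later
# reports "l2p maximum resident size is: 9 (of 10) MiB"):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 -d aef4b48f-9f92-46fe-94b3-e633440f9e83 \
    --l2p_dram_limit 10 -c nvc0n1p0

# Capture the bdev subsystem configuration so a later process can rebuild
# the same stack from JSON (hypothetical grouping; the trace shows only the
# three commands, and the output path comes from the spdk_dd --json= flag
# used further down):
{
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

# Tear the device down cleanly; this drives the 'FTL shutdown' management
# sequence traced below:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0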
00:45:22.927 [2024-04-18 09:14:24.966405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:24.966647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:22.927 [2024-04-18 09:14:24.966756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:22.927 [2024-04-18 09:14:24.966805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.927 [2024-04-18 09:14:24.966879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:22.927 [2024-04-18 09:14:24.970893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:24.971092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:22.927 [2024-04-18 09:14:24.971208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:45:22.927 [2024-04-18 09:14:24.971250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.927 [2024-04-18 09:14:24.971637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:24.971765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:22.927 [2024-04-18 09:14:24.971889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:45:22.927 [2024-04-18 09:14:24.971945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.927 [2024-04-18 09:14:24.975005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:24.975157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:22.927 [2024-04-18 09:14:24.975252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:45:22.927 [2024-04-18 09:14:24.975293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.927 [2024-04-18 09:14:24.981364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:24.981546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:45:22.927 [2024-04-18 09:14:24.981661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.005 ms 00:45:22.927 [2024-04-18 09:14:24.981702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.927 [2024-04-18 09:14:25.028819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.927 [2024-04-18 09:14:25.029060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:22.927 [2024-04-18 09:14:25.029219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.940 ms 00:45:22.927 [2024-04-18 09:14:25.029264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.056083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.056332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:23.186 [2024-04-18 09:14:25.056468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.713 ms 00:45:23.186 [2024-04-18 09:14:25.056516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.056757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.056810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:23.186 
[2024-04-18 09:14:25.056855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:45:23.186 [2024-04-18 09:14:25.056956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.104864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.105154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:45:23.186 [2024-04-18 09:14:25.105299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.836 ms 00:45:23.186 [2024-04-18 09:14:25.105418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.152615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.152875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:45:23.186 [2024-04-18 09:14:25.152972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.078 ms 00:45:23.186 [2024-04-18 09:14:25.153014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.201790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.202038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:23.186 [2024-04-18 09:14:25.202139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.667 ms 00:45:23.186 [2024-04-18 09:14:25.202182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.250639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.186 [2024-04-18 09:14:25.250897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:23.186 [2024-04-18 09:14:25.251026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.197 ms 00:45:23.186 [2024-04-18 09:14:25.251069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.186 [2024-04-18 09:14:25.251224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:23.186 [2024-04-18 09:14:25.251283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.251516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.251621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.251691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.251832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.251945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: 
free 00:45:23.186 [2024-04-18 09:14:25.252592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.252912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.253967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 
261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.254914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.255980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.256944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.257006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.257111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:23.186 [2024-04-18 09:14:25.257171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.257925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.258931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.259847] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.260923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:23.187 [2024-04-18 09:14:25.261635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:23.187 [2024-04-18 09:14:25.261684] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a06ffee-18e7-4484-82dd-d6d34a9070af 00:45:23.187 [2024-04-18 09:14:25.261804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:23.187 [2024-04-18 09:14:25.261850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:23.187 [2024-04-18 09:14:25.261932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:23.187 [2024-04-18 09:14:25.261979] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:23.187 [2024-04-18 09:14:25.262015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:23.187 [2024-04-18 09:14:25.262092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:23.187 [2024-04-18 09:14:25.262133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:23.187 [2024-04-18 09:14:25.262171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:23.187 [2024-04-18 09:14:25.262398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:23.187 [2024-04-18 09:14:25.262461] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.187 [2024-04-18 09:14:25.262539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:23.187 [2024-04-18 09:14:25.262586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.239 ms 00:45:23.187 [2024-04-18 09:14:25.262654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.187 [2024-04-18 09:14:25.286196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.187 [2024-04-18 09:14:25.286448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:23.187 [2024-04-18 09:14:25.286551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.392 ms 00:45:23.187 [2024-04-18 09:14:25.286594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.187 [2024-04-18 09:14:25.287047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:23.187 [2024-04-18 09:14:25.287161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:23.445 [2024-04-18 09:14:25.287250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:45:23.445 [2024-04-18 09:14:25.287293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.445 [2024-04-18 09:14:25.368665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.445 [2024-04-18 09:14:25.368925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:23.445 [2024-04-18 09:14:25.369038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.445 [2024-04-18 09:14:25.369080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.445 [2024-04-18 09:14:25.369196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.445 [2024-04-18 09:14:25.369244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:23.445 [2024-04-18 09:14:25.369340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.445 [2024-04-18 09:14:25.369402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.445 [2024-04-18 09:14:25.369653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.445 [2024-04-18 09:14:25.369744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:23.445 [2024-04-18 09:14:25.369849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.445 [2024-04-18 09:14:25.369891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.445 [2024-04-18 09:14:25.370016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.445 [2024-04-18 09:14:25.370070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:23.445 [2024-04-18 09:14:25.370195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.445 [2024-04-18 09:14:25.370249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.445 [2024-04-18 09:14:25.505847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.445 [2024-04-18 09:14:25.506115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:23.445 [2024-04-18 09:14:25.506219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.445 [2024-04-18 09:14:25.506260] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.559109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.559394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:23.702 [2024-04-18 09:14:25.559487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.559566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.559717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.559779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:23.702 [2024-04-18 09:14:25.559830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.559863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.559957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.560030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:23.702 [2024-04-18 09:14:25.560085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.560117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.560287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.560502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:23.702 [2024-04-18 09:14:25.560553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.560588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.560675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.560763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:23.702 [2024-04-18 09:14:25.560813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.560850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.560924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.560962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:23.702 [2024-04-18 09:14:25.561001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.561049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.561166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:23.702 [2024-04-18 09:14:25.561203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:23.702 [2024-04-18 09:14:25.561242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:23.702 [2024-04-18 09:14:25.561275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:23.702 [2024-04-18 09:14:25.561457] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.009 ms, result 0 00:45:23.702 true 00:45:23.702 09:14:25 -- ftl/restore.sh@66 -- # killprocess 79704 00:45:23.702 09:14:25 -- common/autotest_common.sh@936 -- # '[' -z 79704 ']' 00:45:23.702 
09:14:25 -- common/autotest_common.sh@940 -- # kill -0 79704 00:45:23.702 09:14:25 -- common/autotest_common.sh@941 -- # uname 00:45:23.703 09:14:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:45:23.703 09:14:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79704 00:45:23.703 killing process with pid 79704 00:45:23.703 09:14:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:45:23.703 09:14:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:45:23.703 09:14:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79704' 00:45:23.703 09:14:25 -- common/autotest_common.sh@955 -- # kill 79704 00:45:23.703 09:14:25 -- common/autotest_common.sh@960 -- # wait 79704 00:45:30.271 09:14:31 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:45:34.489 262144+0 records in 00:45:34.489 262144+0 records out 00:45:34.489 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.59851 s, 233 MB/s 00:45:34.489 09:14:35 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:45:36.389 09:14:38 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:36.389 [2024-04-18 09:14:38.110892] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:45:36.389 [2024-04-18 09:14:38.111265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79957 ] 00:45:36.389 [2024-04-18 09:14:38.276040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.646 [2024-04-18 09:14:38.586958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:37.212 [2024-04-18 09:14:39.098647] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:37.212 [2024-04-18 09:14:39.098935] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:37.212 [2024-04-18 09:14:39.256186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.212 [2024-04-18 09:14:39.256479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:37.212 [2024-04-18 09:14:39.256592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:37.213 [2024-04-18 09:14:39.256637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.256806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.256910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:37.213 [2024-04-18 09:14:39.256992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:45:37.213 [2024-04-18 09:14:39.257079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.257139] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:37.213 [2024-04-18 09:14:39.258654] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:37.213 [2024-04-18 09:14:39.258819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 
09:14:39.258987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:37.213 [2024-04-18 09:14:39.259030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:45:37.213 [2024-04-18 09:14:39.259085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.260687] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:37.213 [2024-04-18 09:14:39.284348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.284589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:37.213 [2024-04-18 09:14:39.284695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.660 ms 00:45:37.213 [2024-04-18 09:14:39.284778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.284900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.284948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:37.213 [2024-04-18 09:14:39.285023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:45:37.213 [2024-04-18 09:14:39.285093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.293063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.293296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:37.213 [2024-04-18 09:14:39.293467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.823 ms 00:45:37.213 [2024-04-18 09:14:39.293512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.293699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.293808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:37.213 [2024-04-18 09:14:39.293886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:45:37.213 [2024-04-18 09:14:39.293986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.294086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.294135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:37.213 [2024-04-18 09:14:39.294217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:45:37.213 [2024-04-18 09:14:39.294300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.294377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:37.213 [2024-04-18 09:14:39.300946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.301109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:37.213 [2024-04-18 09:14:39.301225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.587 ms 00:45:37.213 [2024-04-18 09:14:39.301265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.301368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.301509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:37.213 [2024-04-18 
09:14:39.301585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:37.213 [2024-04-18 09:14:39.301680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.301798] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:37.213 [2024-04-18 09:14:39.301907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:45:37.213 [2024-04-18 09:14:39.302037] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:37.213 [2024-04-18 09:14:39.302101] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:45:37.213 [2024-04-18 09:14:39.302256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:45:37.213 [2024-04-18 09:14:39.302326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:37.213 [2024-04-18 09:14:39.302496] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:45:37.213 [2024-04-18 09:14:39.302674] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:37.213 [2024-04-18 09:14:39.302759] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:37.213 [2024-04-18 09:14:39.302821] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:37.213 [2024-04-18 09:14:39.302854] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:37.213 [2024-04-18 09:14:39.302888] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:45:37.213 [2024-04-18 09:14:39.302955] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:45:37.213 [2024-04-18 09:14:39.302991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.303024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:37.213 [2024-04-18 09:14:39.303058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:45:37.213 [2024-04-18 09:14:39.303091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.303250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.213 [2024-04-18 09:14:39.303356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:37.213 [2024-04-18 09:14:39.303453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:45:37.213 [2024-04-18 09:14:39.303536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.213 [2024-04-18 09:14:39.303657] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:37.213 [2024-04-18 09:14:39.303731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:37.213 [2024-04-18 09:14:39.303770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:37.213 [2024-04-18 09:14:39.303890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:37.213 [2024-04-18 09:14:39.303928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:37.213 [2024-04-18 09:14:39.304056] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:45:37.213 [2024-04-18 09:14:39.304097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:37.213 [2024-04-18 09:14:39.304169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:37.213 [2024-04-18 09:14:39.304273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:37.213 [2024-04-18 09:14:39.304313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:37.213 [2024-04-18 09:14:39.304434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:37.213 [2024-04-18 09:14:39.304475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:37.213 [2024-04-18 09:14:39.304574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:37.213 [2024-04-18 09:14:39.304614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:37.213 [2024-04-18 09:14:39.304648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:45:37.213 [2024-04-18 09:14:39.304712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:37.213 [2024-04-18 09:14:39.304813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:37.213 [2024-04-18 09:14:39.304853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:45:37.213 [2024-04-18 09:14:39.304945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:37.213 [2024-04-18 09:14:39.304985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:45:37.213 [2024-04-18 09:14:39.305020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:45:37.213 [2024-04-18 09:14:39.305091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:45:37.213 [2024-04-18 09:14:39.305125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:37.213 [2024-04-18 09:14:39.305158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:37.213 [2024-04-18 09:14:39.305191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:37.213 [2024-04-18 09:14:39.305223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:37.213 [2024-04-18 09:14:39.305283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:45:37.213 [2024-04-18 09:14:39.305316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:37.213 [2024-04-18 09:14:39.305348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:37.213 [2024-04-18 09:14:39.305392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:37.213 [2024-04-18 09:14:39.305452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:37.213 [2024-04-18 09:14:39.305556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:37.213 [2024-04-18 09:14:39.305595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:45:37.213 [2024-04-18 09:14:39.305628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:45:37.213 [2024-04-18 09:14:39.305789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:37.213 [2024-04-18 09:14:39.305831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:37.213 [2024-04-18 09:14:39.305914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:37.213 [2024-04-18 09:14:39.305954] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:37.213 [2024-04-18 09:14:39.306024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:45:37.213 [2024-04-18 09:14:39.306137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:37.213 [2024-04-18 09:14:39.306175] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:37.213 [2024-04-18 09:14:39.306231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:37.213 [2024-04-18 09:14:39.306271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:37.213 [2024-04-18 09:14:39.306311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:37.213 [2024-04-18 09:14:39.306345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:37.213 [2024-04-18 09:14:39.306451] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:37.214 [2024-04-18 09:14:39.306492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:37.214 [2024-04-18 09:14:39.306526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:37.214 [2024-04-18 09:14:39.306568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:37.214 [2024-04-18 09:14:39.306601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:37.214 [2024-04-18 09:14:39.306672] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:37.214 [2024-04-18 09:14:39.306794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:37.214 [2024-04-18 09:14:39.306922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:37.214 [2024-04-18 09:14:39.307027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:45:37.214 [2024-04-18 09:14:39.307179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:45:37.214 [2024-04-18 09:14:39.307236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:45:37.214 [2024-04-18 09:14:39.307382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:45:37.214 [2024-04-18 09:14:39.307484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:45:37.214 [2024-04-18 09:14:39.307535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:45:37.214 [2024-04-18 09:14:39.307586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:45:37.214 [2024-04-18 09:14:39.307652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:45:37.214 [2024-04-18 09:14:39.307702] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:45:37.214 [2024-04-18 09:14:39.307753] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:45:37.214 [2024-04-18 09:14:39.307873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:45:37.214 [2024-04-18 09:14:39.307931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:45:37.214 [2024-04-18 09:14:39.308090] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:45:37.214 [2024-04-18 09:14:39.308152] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:45:37.214 [2024-04-18 09:14:39.308262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:45:37.214 [2024-04-18 09:14:39.308319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:45:37.214 [2024-04-18 09:14:39.308392] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:45:37.214 [2024-04-18 09:14:39.308452] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:45:37.214 [2024-04-18 09:14:39.308510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.214 [2024-04-18 09:14:39.308557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:45:37.214 [2024-04-18 09:14:39.308593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.891 ms
00:45:37.214 [2024-04-18 09:14:39.308628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
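The two layout dumps above describe the same regions in different units: the dump_region lines report offsets and sizes in MiB, while the superblock dump reports them in FTL blocks (blk_offs/blk_sz). The figures are consistent with a 4 KiB FTL block; for example, the l2p region's 0x5000 blocks are exactly the 80.00 MiB shown earlier, which in turn equals the 20971520 L2P entries times the 4-byte address size. A minimal cross-check over a few entries transcribed from this log, pairing type codes with the named regions by their matching offsets and sizes (the 4 KiB block size is inferred from these figures, not printed by the log itself):

    #include <stdio.h>
    #include <stdint.h>

    /* A few entries transcribed from the SB metadata layout dump above. */
    struct region {
        const char *name;
        uint64_t blk_offs;
        uint64_t blk_sz;
    };

    int main(void)
    {
        /* Assumed: 4 KiB per FTL block, inferred from the dump itself
         * (e.g. l2p: 0x5000 blocks == the 80.00 MiB dump_region reports). */
        const double block_size = 4096.0;
        const double mib = 1024.0 * 1024.0;
        const struct region regions[] = {
            { "sb (type 0x0)",       0x0,    0x20 },      /* 0.12 MiB */
            { "l2p (type 0x2)",      0x20,   0x5000 },    /* 80.00 MiB */
            { "data_nvc (type 0x8)", 0x61e0, 0x100000 },  /* 4096.00 MiB */
            { "data_btm (type 0x9)", 0x40,   0x1900000 }, /* 102400.00 MiB */
        };

        for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++)
            printf("%-20s offset %10.2f MiB, size %10.2f MiB\n",
                   regions[i].name,
                   regions[i].blk_offs * block_size / mib,
                   regions[i].blk_sz * block_size / mib);

        /* The l2p region also matches the L2P table itself:
         * 20971520 entries * 4-byte addresses = 80 MiB. */
        printf("L2P table: %.2f MiB\n", 20971520.0 * 4 / mib);
        return 0;
    }

The same arithmetic explains the data_nvc offset: 0x61e0 blocks at 4 KiB each is 97.88 MiB, matching the dump_region line earlier in this run.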
00:45:37.473 [2024-04-18 09:14:39.337282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.337569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:45:37.473 [2024-04-18 09:14:39.337726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.508 ms
00:45:37.473 [2024-04-18 09:14:39.337811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.337946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.337988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:45:37.473 [2024-04-18 09:14:39.338060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms
00:45:37.473 [2024-04-18 09:14:39.338129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.408018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.408272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:45:37.473 [2024-04-18 09:14:39.408366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.775 ms
00:45:37.473 [2024-04-18 09:14:39.408475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.408568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.408633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:45:37.473 [2024-04-18 09:14:39.408708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:45:37.473 [2024-04-18 09:14:39.408746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.409362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.409506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:45:37.473 [2024-04-18 09:14:39.409586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms
00:45:37.473 [2024-04-18 09:14:39.409625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.409825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.409917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:45:37.473 [2024-04-18 09:14:39.409992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms
00:45:37.473 [2024-04-18 09:14:39.410063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.436252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.436484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:45:37.473 [2024-04-18 09:14:39.436588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.130 ms
00:45:37.473 [2024-04-18 09:14:39.436629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.460298] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:45:37.473 [2024-04-18 09:14:39.460568] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:45:37.473 [2024-04-18 09:14:39.460687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.460726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:45:37.473 [2024-04-18 09:14:39.460828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.850 ms
00:45:37.473 [2024-04-18 09:14:39.460869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.497934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.498205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:45:37.473 [2024-04-18 09:14:39.498301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.982 ms
00:45:37.473 [2024-04-18 09:14:39.498344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.521924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.522127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:45:37.473 [2024-04-18 09:14:39.522254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.388 ms
00:45:37.473 [2024-04-18 09:14:39.522352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.545099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.545362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:45:37.473 [2024-04-18 09:14:39.545567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.627 ms
00:45:37.473 [2024-04-18 09:14:39.545606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.473 [2024-04-18 09:14:39.546295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.473 [2024-04-18 09:14:39.546459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:45:37.473 [2024-04-18 09:14:39.546550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms
00:45:37.473 [2024-04-18 09:14:39.546589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.658213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.658487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:45:37.731 [2024-04-18 09:14:39.658586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.530 ms
00:45:37.731 [2024-04-18 09:14:39.658663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.676429] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:45:37.731 [2024-04-18 09:14:39.680221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.680404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:45:37.731 [2024-04-18 09:14:39.680498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.456 ms
00:45:37.731 [2024-04-18 09:14:39.680540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.680720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.680829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:45:37.731 [2024-04-18 09:14:39.680920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:45:37.731 [2024-04-18 09:14:39.681028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.681168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.681210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:45:37.731 [2024-04-18 09:14:39.681291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:45:37.731 [2024-04-18 09:14:39.681404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.683919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.684079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:45:37.731 [2024-04-18 09:14:39.684164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms
00:45:37.731 [2024-04-18 09:14:39.684211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.684318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.684358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:45:37.731 [2024-04-18 09:14:39.684412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:45:37.731 [2024-04-18 09:14:39.684518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
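Each management step in the sequence above is traced by mngt/ftl_mngt.c as an Action/name/duration/status quadruple, so slow steps can be picked out of a console log mechanically. A minimal filter sketch, assuming only the "name: "/"duration: ... ms" wording seen in this log (reads the log on stdin; the 10 ms threshold is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Pair each "name: <step>" line with the following "duration: <ms> ms"
     * line and report steps slower than a threshold. */
    int main(void)
    {
        char line[4096], name[256] = "";
        const double threshold_ms = 10.0;

        while (fgets(line, sizeof(line), stdin)) {
            char *p;
            if ((p = strstr(line, "name: ")) != NULL) {
                /* Remember the most recent step name (strip the newline). */
                snprintf(name, sizeof(name), "%s", p + strlen("name: "));
                name[strcspn(name, "\n")] = '\0';
            } else if ((p = strstr(line, "duration: ")) != NULL) {
                double ms = strtod(p + strlen("duration: "), NULL);
                if (ms >= threshold_ms)
                    printf("%8.3f ms  %s\n", ms, name);
            }
        }
        return 0;
    }

Run over this startup sequence, such a filter would single out, for example, Restore P2L checkpoints (111.530 ms) and Initialize NV cache (69.775 ms).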
00:45:37.731 [2024-04-18 09:14:39.684595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:45:37.731 [2024-04-18 09:14:39.684689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.684730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:45:37.731 [2024-04-18 09:14:39.684767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms
00:45:37.731 [2024-04-18 09:14:39.684860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.730637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.730842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:45:37.731 [2024-04-18 09:14:39.731006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.706 ms
00:45:37.731 [2024-04-18 09:14:39.731045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.731201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:37.731 [2024-04-18 09:14:39.731312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:45:37.731 [2024-04-18 09:14:39.731403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:45:37.731 [2024-04-18 09:14:39.731498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:37.731 [2024-04-18 09:14:39.732806] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 476.088 ms, result 0
00:46:06.459  Copying: 34/1024 [MB] (34 MBps) Copying: 70/1024 [MB] (36 MBps) Copying: 107/1024 [MB] (36 MBps) Copying: 143/1024 [MB] (36 MBps) Copying: 180/1024 [MB] (36 MBps) Copying: 217/1024 [MB] (36 MBps) Copying: 250/1024 [MB] (33 MBps) Copying: 284/1024 [MB] (33 MBps) Copying: 319/1024 [MB] (34 MBps) Copying: 356/1024 [MB] (37 MBps) Copying: 392/1024 [MB] (35 MBps) Copying: 425/1024 [MB] (33 MBps) Copying: 459/1024 [MB] (33 MBps) Copying: 495/1024 [MB] (36 MBps) Copying: 531/1024 [MB] (35 MBps) Copying: 569/1024 [MB] (37 MBps) Copying: 605/1024 [MB] (36 MBps) Copying: 640/1024 [MB] (34 MBps) Copying: 676/1024 [MB] (36 MBps) Copying: 714/1024 [MB] (37 MBps) Copying: 750/1024 [MB] (36 MBps) Copying: 784/1024 [MB] (33 MBps) Copying: 820/1024 [MB] (35 MBps) Copying: 856/1024 [MB] (35 MBps) Copying: 892/1024 [MB] (36 MBps) Copying: 928/1024 [MB] (36 MBps) Copying: 965/1024 [MB] (37 MBps) Copying: 1003/1024 [MB] (37 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-04-18 09:15:08.312302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:06.459 [2024-04-18 09:15:08.312524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:46:06.459 [2024-04-18 09:15:08.312638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:46:06.459 [2024-04-18 09:15:08.312684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.459 [2024-04-18 09:15:08.312796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:46:06.459 [2024-04-18 09:15:08.317292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:06.460 [2024-04-18 09:15:08.317492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:46:06.460 [2024-04-18 09:15:08.317594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms
00:46:06.460 [2024-04-18 09:15:08.317636]
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.320509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.320685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:06.460 [2024-04-18 09:15:08.320794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.728 ms 00:46:06.460 [2024-04-18 09:15:08.320886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.336336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.336641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:06.460 [2024-04-18 09:15:08.336743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.384 ms 00:46:06.460 [2024-04-18 09:15:08.336787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.343035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.343226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:46:06.460 [2024-04-18 09:15:08.343311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:46:06.460 [2024-04-18 09:15:08.343364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.391979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.392247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:06.460 [2024-04-18 09:15:08.392345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.445 ms 00:46:06.460 [2024-04-18 09:15:08.392421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.419481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.419748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:06.460 [2024-04-18 09:15:08.419974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.864 ms 00:46:06.460 [2024-04-18 09:15:08.420022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.420240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.420294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:06.460 [2024-04-18 09:15:08.420344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:46:06.460 [2024-04-18 09:15:08.420469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.470773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.471061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:46:06.460 [2024-04-18 09:15:08.471158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.240 ms 00:46:06.460 [2024-04-18 09:15:08.471197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.460 [2024-04-18 09:15:08.518981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.460 [2024-04-18 09:15:08.519268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:46:06.460 [2024-04-18 09:15:08.519455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.689 ms 00:46:06.460 [2024-04-18 09:15:08.519500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.719 [2024-04-18 09:15:08.566822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.719 [2024-04-18 09:15:08.567077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:06.719 [2024-04-18 09:15:08.567244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.124 ms 00:46:06.719 [2024-04-18 09:15:08.567283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.719 [2024-04-18 09:15:08.614661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.719 [2024-04-18 09:15:08.614929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:06.719 [2024-04-18 09:15:08.615025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.210 ms 00:46:06.719 [2024-04-18 09:15:08.615066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.719 [2024-04-18 09:15:08.615173] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:06.719 [2024-04-18 09:15:08.615280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.615997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:46:06.719 [2024-04-18 09:15:08.616850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.616966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.617902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.618946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.619965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.620974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.621971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.622972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.623026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:06.719 [2024-04-18 09:15:08.623209] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:46:06.719 [2024-04-18 09:15:08.623941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:46:06.719 [2024-04-18 09:15:08.624051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a06ffee-18e7-4484-82dd-d6d34a9070af
00:46:06.719 [2024-04-18 09:15:08.624116] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:46:06.719 [2024-04-18 09:15:08.624151] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:46:06.719 [2024-04-18 09:15:08.624185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:46:06.719 [2024-04-18 09:15:08.624278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:46:06.719 [2024-04-18 09:15:08.624331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:46:06.719 [2024-04-18 09:15:08.624366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:46:06.719 [2024-04-18 09:15:08.624419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:46:06.719 [2024-04-18 09:15:08.624453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:46:06.719 [2024-04-18 09:15:08.624536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:46:06.719 [2024-04-18 09:15:08.624579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:06.719 [2024-04-18 09:15:08.624615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:46:06.719 [2024-04-18 09:15:08.624654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.407 ms
00:46:06.719 [2024-04-18 09:15:08.624688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.719 [2024-04-18 09:15:08.648807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:06.719 [2024-04-18 09:15:08.648999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:46:06.719 [2024-04-18 09:15:08.649100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.965 ms
00:46:06.719 [2024-04-18 09:15:08.649203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
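The statistics dump just above shows why WAF is printed as inf: write amplification is conventionally total media writes divided by user writes, and this pass recorded 960 total writes against 0 user writes. A sketch of that arithmetic with the counters as reported (the formula is the conventional WAF definition; the log does not spell it out):

    #include <stdio.h>

    int main(void)
    {
        /* Counters exactly as reported by ftl_dev_dump_stats above. */
        double total_writes = 960.0;
        double user_writes = 0.0;

        /* Conventional definition: WAF = media writes / user writes.
         * With IEEE-754 doubles, 960.0 / 0.0 evaluates to +infinity,
         * which the stats dump renders as "inf". */
        double waf = total_writes / user_writes;
        printf("WAF: %g\n", waf); /* prints "WAF: inf" */
        return 0;
    }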
00:46:06.719 [2024-04-18 09:15:08.649543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:06.719 [2024-04-18 09:15:08.649647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:46:06.719 [2024-04-18 09:15:08.649727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms
00:46:06.719 [2024-04-18 09:15:08.649766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.719 [2024-04-18 09:15:08.710947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.719 [2024-04-18 09:15:08.711246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:46:06.719 [2024-04-18 09:15:08.711422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.719 [2024-04-18 09:15:08.711463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.719 [2024-04-18 09:15:08.711569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.719 [2024-04-18 09:15:08.711659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:46:06.719 [2024-04-18 09:15:08.711699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.719 [2024-04-18 09:15:08.711730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.719 [2024-04-18 09:15:08.711892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.719 [2024-04-18 09:15:08.712016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:46:06.719 [2024-04-18 09:15:08.712113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.719 [2024-04-18 09:15:08.712154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.719 [2024-04-18 09:15:08.712207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.719 [2024-04-18 09:15:08.712294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:46:06.719 [2024-04-18 09:15:08.712330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.720 [2024-04-18 09:15:08.712364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.978 [2024-04-18 09:15:08.850127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.978 [2024-04-18 09:15:08.850438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:46:06.978 [2024-04-18 09:15:08.850548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.978 [2024-04-18 09:15:08.850592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.978 [2024-04-18 09:15:08.906593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.978 [2024-04-18 09:15:08.906829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:46:06.978 [2024-04-18 09:15:08.906954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.978 [2024-04-18 09:15:08.907055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.978 [2024-04-18 09:15:08.907163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:06.978 [2024-04-18 09:15:08.907202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:46:06.978 [2024-04-18 09:15:08.907280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:06.978 [2024-04-18 09:15:08.907343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:06.978 [2024-04-18 09:15:08.907488] mngt/ftl_mngt.c: 406:trace_step:
*NOTICE*: [FTL][ftl0] Rollback 00:46:06.978 [2024-04-18 09:15:08.907533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:06.978 [2024-04-18 09:15:08.907663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.978 [2024-04-18 09:15:08.907703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.978 [2024-04-18 09:15:08.907916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.978 [2024-04-18 09:15:08.908019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:06.978 [2024-04-18 09:15:08.908102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.978 [2024-04-18 09:15:08.908142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.978 [2024-04-18 09:15:08.908267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.978 [2024-04-18 09:15:08.908312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:06.978 [2024-04-18 09:15:08.908408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.978 [2024-04-18 09:15:08.908448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.978 [2024-04-18 09:15:08.908516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.978 [2024-04-18 09:15:08.908556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:06.978 [2024-04-18 09:15:08.908591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.978 [2024-04-18 09:15:08.908675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.978 [2024-04-18 09:15:08.908767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.978 [2024-04-18 09:15:08.908809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:06.978 [2024-04-18 09:15:08.908889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.978 [2024-04-18 09:15:08.908930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.978 [2024-04-18 09:15:08.909105] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 596.751 ms, result 0 00:46:08.881 00:46:08.881 00:46:08.881 09:15:10 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:46:08.881 [2024-04-18 09:15:10.696350] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
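At this point restore.sh has shut the FTL device down cleanly and launches spdk_dd to read the data back out of ftl0 into a test file. The --count=262144 in the command above lines up with the earlier copy: at the 4 KiB FTL block size inferred from the layout math earlier, 262144 blocks are exactly the 1024 MB the copy progress reported before 'FTL shutdown'. A quick check (the block size is still an inference, not printed by spdk_dd):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t count = 262144;    /* --count from the spdk_dd command above */
        const uint64_t block_size = 4096; /* assumed: 4 KiB FTL block size */

        /* 262144 blocks * 4096 bytes = 1 GiB, i.e. the "1024/1024 [MB]"
         * the copy progress reported before the shutdown above. */
        printf("%.0f MiB\n", count * block_size / (1024.0 * 1024.0));
        return 0;
    }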
00:46:08.881 [2024-04-18 09:15:10.696622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80283 ] 00:46:08.881 [2024-04-18 09:15:10.886901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:09.139 [2024-04-18 09:15:11.225603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:09.705 [2024-04-18 09:15:11.683484] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:09.705 [2024-04-18 09:15:11.683580] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:09.983 [2024-04-18 09:15:11.847951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.848039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:09.983 [2024-04-18 09:15:11.848074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:46:09.983 [2024-04-18 09:15:11.848091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.848206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.848228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:09.983 [2024-04-18 09:15:11.848244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:46:09.983 [2024-04-18 09:15:11.848260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.848292] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:09.983 [2024-04-18 09:15:11.849411] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:09.983 [2024-04-18 09:15:11.849448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.849464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:09.983 [2024-04-18 09:15:11.849480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:46:09.983 [2024-04-18 09:15:11.849494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.851178] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:09.983 [2024-04-18 09:15:11.872127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.872206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:09.983 [2024-04-18 09:15:11.872237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.954 ms 00:46:09.983 [2024-04-18 09:15:11.872249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.872380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.872397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:09.983 [2024-04-18 09:15:11.872409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:46:09.983 [2024-04-18 09:15:11.872420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.880566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 
09:15:11.880621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:09.983 [2024-04-18 09:15:11.880639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.011 ms 00:46:09.983 [2024-04-18 09:15:11.880651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.880781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.880799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:09.983 [2024-04-18 09:15:11.880812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:46:09.983 [2024-04-18 09:15:11.880823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.880881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.880898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:09.983 [2024-04-18 09:15:11.880911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:09.983 [2024-04-18 09:15:11.880922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.880962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:09.983 [2024-04-18 09:15:11.887693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.887785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:09.983 [2024-04-18 09:15:11.887803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:46:09.983 [2024-04-18 09:15:11.887815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.887892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.887912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:09.983 [2024-04-18 09:15:11.887927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:46:09.983 [2024-04-18 09:15:11.887939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.888045] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:09.983 [2024-04-18 09:15:11.888096] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:46:09.983 [2024-04-18 09:15:11.888142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:09.983 [2024-04-18 09:15:11.888163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:46:09.983 [2024-04-18 09:15:11.888250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:46:09.983 [2024-04-18 09:15:11.888266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:09.983 [2024-04-18 09:15:11.888281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:46:09.983 [2024-04-18 09:15:11.888302] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:09.983 [2024-04-18 09:15:11.888317] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:09.983 [2024-04-18 09:15:11.888334] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:09.983 [2024-04-18 09:15:11.888346] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:09.983 [2024-04-18 09:15:11.888358] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:46:09.983 [2024-04-18 09:15:11.888369] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:46:09.983 [2024-04-18 09:15:11.888397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.888409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:09.983 [2024-04-18 09:15:11.888421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:46:09.983 [2024-04-18 09:15:11.888433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.888507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.983 [2024-04-18 09:15:11.888520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:09.983 [2024-04-18 09:15:11.888536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:46:09.983 [2024-04-18 09:15:11.888548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.983 [2024-04-18 09:15:11.888630] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:09.983 [2024-04-18 09:15:11.888645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:09.983 [2024-04-18 09:15:11.888658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:09.983 [2024-04-18 09:15:11.888670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:09.983 [2024-04-18 09:15:11.888683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:09.983 [2024-04-18 09:15:11.888695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:09.983 [2024-04-18 09:15:11.888707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:09.983 [2024-04-18 09:15:11.888719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:09.983 [2024-04-18 09:15:11.888734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:09.983 [2024-04-18 09:15:11.888746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:09.983 [2024-04-18 09:15:11.888757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:09.983 [2024-04-18 09:15:11.888769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:09.983 [2024-04-18 09:15:11.888794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:09.983 [2024-04-18 09:15:11.888806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:09.983 [2024-04-18 09:15:11.888817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:46:09.984 [2024-04-18 09:15:11.888829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:09.984 [2024-04-18 09:15:11.888841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:09.984 [2024-04-18 09:15:11.888853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:46:09.984 [2024-04-18 09:15:11.888864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:46:09.984 [2024-04-18 09:15:11.888875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:46:09.984 [2024-04-18 09:15:11.888887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:46:09.984 [2024-04-18 09:15:11.888898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:46:09.984 [2024-04-18 09:15:11.888910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:09.984 [2024-04-18 09:15:11.888922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:09.984 [2024-04-18 09:15:11.888933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:09.984 [2024-04-18 09:15:11.888944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:09.984 [2024-04-18 09:15:11.888955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:46:09.984 [2024-04-18 09:15:11.888966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:09.984 [2024-04-18 09:15:11.888977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:09.984 [2024-04-18 09:15:11.888988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:09.984 [2024-04-18 09:15:11.889000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:09.984 [2024-04-18 09:15:11.889011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:09.984 [2024-04-18 09:15:11.889022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:46:09.984 [2024-04-18 09:15:11.889045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:09.984 [2024-04-18 09:15:11.889056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:09.984 [2024-04-18 09:15:11.889068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:09.984 [2024-04-18 09:15:11.889078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:09.984 [2024-04-18 09:15:11.889089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:09.984 [2024-04-18 09:15:11.889100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:46:09.984 [2024-04-18 09:15:11.889111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:09.984 [2024-04-18 09:15:11.889123] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:09.984 [2024-04-18 09:15:11.889135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:09.984 [2024-04-18 09:15:11.889151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:09.984 [2024-04-18 09:15:11.889169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:09.984 [2024-04-18 09:15:11.889181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:09.984 [2024-04-18 09:15:11.889192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:09.984 [2024-04-18 09:15:11.889203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:09.984 [2024-04-18 09:15:11.889215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:09.984 [2024-04-18 09:15:11.889225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:09.984 [2024-04-18 09:15:11.889237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:09.984 [2024-04-18 09:15:11.889248] upgrade/ftl_sb_v5.c: 
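The dump prints regions in declaration order rather than disk order; sorted by offset, the NV cache regions tile the device with no gaps or overlaps. A quick check with values copied from the dump (offsets are rounded to 1/100 MiB in the log, hence the tolerance):

    regions = [  # (name, offset MiB, size MiB) from the NV cache layout dump
        ("sb", 0.00, 0.12), ("l2p", 0.12, 80.00),
        ("band_md", 80.12, 0.50), ("band_md_mirror", 80.62, 0.50),
        ("p2l0", 81.12, 4.00), ("p2l1", 85.12, 4.00),
        ("p2l2", 89.12, 4.00), ("p2l3", 93.12, 4.00),
        ("trim_md", 97.12, 0.25), ("trim_md_mirror", 97.38, 0.25),
        ("nvc_md", 97.62, 0.12), ("nvc_md_mirror", 97.75, 0.12),
        ("data_nvc", 97.88, 4096.00),
    ]
    end = 0.0
    for name, off, size in sorted(regions, key=lambda r: r[1]):
        assert abs(off - end) < 0.02, name   # contiguous up to rounding
        end = off + size

data_nvc ends near 4193.88 MiB, comfortably inside the 5171.00 MiB NV cache capacity reported above.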
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:09.984 [2024-04-18 09:15:11.889262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:09.984 [2024-04-18 09:15:11.889275] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:09.984 [2024-04-18 09:15:11.889287] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:46:09.984 [2024-04-18 09:15:11.889300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:46:09.984 [2024-04-18 09:15:11.889312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:46:09.984 [2024-04-18 09:15:11.889324] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:46:09.984 [2024-04-18 09:15:11.889336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:46:09.984 [2024-04-18 09:15:11.889348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:46:09.984 [2024-04-18 09:15:11.889360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:46:09.984 [2024-04-18 09:15:11.889372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:46:09.984 [2024-04-18 09:15:11.889396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:46:09.984 [2024-04-18 09:15:11.889409] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:46:09.984 [2024-04-18 09:15:11.889421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:46:09.984 [2024-04-18 09:15:11.889435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:46:09.984 [2024-04-18 09:15:11.889447] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:09.984 [2024-04-18 09:15:11.889460] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:09.984 [2024-04-18 09:15:11.889473] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:09.984 [2024-04-18 09:15:11.889485] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:09.984 [2024-04-18 09:15:11.889498] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:09.984 [2024-04-18 09:15:11.889511] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:46:09.984 [2024-04-18 09:15:11.889523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.889537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:09.984 [2024-04-18 09:15:11.889549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:46:09.984 [2024-04-18 09:15:11.889560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.918098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.918166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:09.984 [2024-04-18 09:15:11.918186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.487 ms 00:46:09.984 [2024-04-18 09:15:11.918217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.918345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.918383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:09.984 [2024-04-18 09:15:11.918398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:46:09.984 [2024-04-18 09:15:11.918421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.988814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.988883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:09.984 [2024-04-18 09:15:11.988903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.308 ms 00:46:09.984 [2024-04-18 09:15:11.988920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.988992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.989006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:09.984 [2024-04-18 09:15:11.989020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:09.984 [2024-04-18 09:15:11.989031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.989584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.989610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:09.984 [2024-04-18 09:15:11.989623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:46:09.984 [2024-04-18 09:15:11.989634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:11.989790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:11.989811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:09.984 [2024-04-18 09:15:11.989824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:46:09.984 [2024-04-18 09:15:11.989836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:12.016400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:12.016465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:09.984 [2024-04-18 09:15:12.016486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.526 ms 00:46:09.984 [2024-04-18 
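The hex superblock layout is stricter still: in 4 KiB blocks, every region starts exactly where the previous one ends (type 0xfffffffe appears to mark unallocated space), and the totals reproduce the device capacities reported earlier to the MiB:

    BLOCK = 4096  # bytes per FTL block, per the sb-region arithmetic above

    nvc = [  # (blk_offs, blk_sz) from "SB metadata layout - nvc"
        (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
        (0x5120, 0x400), (0x5520, 0x400), (0x5920, 0x400), (0x5d20, 0x400),
        (0x6120, 0x40), (0x6160, 0x40), (0x61a0, 0x20), (0x61c0, 0x20),
        (0x61e0, 0x100000), (0x1061e0, 0x3d120),
    ]
    base = [  # (blk_offs, blk_sz) from "SB metadata layout - base dev"
        (0x0, 0x20), (0x20, 0x20), (0x40, 0x1900000),
        (0x1900040, 0x360), (0x19003a0, 0x3fc60),
    ]

    for region_list, capacity_mib in ((nvc, 5171.00), (base, 103424.00)):
        end = 0
        for offs, size in region_list:
            assert offs == end   # contiguous: no gaps, no overlaps
            end = offs + size
        assert end * BLOCK == capacity_mib * 1024 * 1024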
09:15:12.016498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:12.037857] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:09.984 [2024-04-18 09:15:12.037943] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:09.984 [2024-04-18 09:15:12.037965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:12.037981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:09.984 [2024-04-18 09:15:12.038001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.300 ms 00:46:09.984 [2024-04-18 09:15:12.038016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.984 [2024-04-18 09:15:12.073589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:09.984 [2024-04-18 09:15:12.073681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:09.984 [2024-04-18 09:15:12.073702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.473 ms 00:46:09.984 [2024-04-18 09:15:12.073716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.097780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.097862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:10.283 [2024-04-18 09:15:12.097882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.964 ms 00:46:10.283 [2024-04-18 09:15:12.097909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.121918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.121997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:10.283 [2024-04-18 09:15:12.122020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.921 ms 00:46:10.283 [2024-04-18 09:15:12.122034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.122658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.122694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:10.283 [2024-04-18 09:15:12.122712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:46:10.283 [2024-04-18 09:15:12.122728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.235030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.235144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:10.283 [2024-04-18 09:15:12.235169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.265 ms 00:46:10.283 [2024-04-18 09:15:12.235184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.252780] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:10.283 [2024-04-18 09:15:12.256521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.256583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:10.283 [2024-04-18 09:15:12.256603] mngt/ftl_mngt.c: 
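The restored cache state is consistent with the geometry above: two full plus two empty chunks account for the entire "NV cache chunk count 4" from layout setup, and, assuming the chunks evenly split the 4096 MiB data_nvc region (an assumption, not something the log states), each chunk would be 1 GiB:

    full, empty, total = 2, 2, 4   # "full chunks = 2, empty chunks = 2"
    assert full + empty == total   # "NV cache chunk count 4"
    chunk_mib = 4096 / total       # hypothetical even split of data_nvc
    assert chunk_mib == 1024.0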
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.204 ms 00:46:10.283 [2024-04-18 09:15:12.256615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.256754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.256778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:10.283 [2024-04-18 09:15:12.256792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:46:10.283 [2024-04-18 09:15:12.256804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.256885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.256903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:10.283 [2024-04-18 09:15:12.256916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:46:10.283 [2024-04-18 09:15:12.256928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.259338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.259398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:46:10.283 [2024-04-18 09:15:12.259418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:46:10.283 [2024-04-18 09:15:12.259430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.259469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.259482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:10.283 [2024-04-18 09:15:12.259494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:10.283 [2024-04-18 09:15:12.259505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.259609] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:10.283 [2024-04-18 09:15:12.259629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.259641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:10.283 [2024-04-18 09:15:12.259653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:46:10.283 [2024-04-18 09:15:12.259669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.283 [2024-04-18 09:15:12.307787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.283 [2024-04-18 09:15:12.307878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:10.283 [2024-04-18 09:15:12.307900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.083 ms 00:46:10.283 [2024-04-18 09:15:12.307913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.284 [2024-04-18 09:15:12.308065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:10.284 [2024-04-18 09:15:12.308098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:10.284 [2024-04-18 09:15:12.308112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:46:10.284 [2024-04-18 09:15:12.308124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:10.284 [2024-04-18 09:15:12.309597] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 461.119 ms, result 0 00:46:41.647  Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-18 09:15:43.699270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.647 [2024-04-18 09:15:43.699745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:41.647 [2024-04-18 09:15:43.699997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:41.647 [2024-04-18 09:15:43.700218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.647 [2024-04-18 09:15:43.700408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:41.647 [2024-04-18 09:15:43.706915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.647 [2024-04-18 09:15:43.707081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:41.647 [2024-04-18 09:15:43.707174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:46:41.647 [2024-04-18 09:15:43.707222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.647 [2024-04-18 09:15:43.707522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.647 [2024-04-18 09:15:43.707578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:41.647 [2024-04-18 09:15:43.707613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:46:41.647 [2024-04-18 09:15:43.707717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.647 [2024-04-18 09:15:43.710931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.647 [2024-04-18 09:15:43.711068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:41.647 [2024-04-18 09:15:43.711156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:46:41.647 [2024-04-18 09:15:43.711195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.647 [2024-04-18 09:15:43.717087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.647 [2024-04-18 09:15:43.717281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:46:41.647 [2024-04-18 09:15:43.717456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:46:41.647 [2024-04-18 09:15:43.717504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.759622] 
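The copy summary and the FTL timestamps agree. The 1024 MiB transfer sits between the 'FTL startup' finish at 09:15:12.309 and the first shutdown step at 09:15:43.699, about 31.4 s, which works out to roughly the 33 MBps average spdk_dd reports:

    from datetime import datetime

    start = datetime.fromisoformat("2024-04-18 09:15:12.309597")  # startup done
    end = datetime.fromisoformat("2024-04-18 09:15:43.699270")    # shutdown begins
    mbps = 1024 / (end - start).total_seconds()
    print(round(mbps, 1))   # -> 32.6, close to the reported 33 MBps average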
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.759832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:41.907 [2024-04-18 09:15:43.759914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.963 ms 00:46:41.907 [2024-04-18 09:15:43.759953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.782765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.783041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:41.907 [2024-04-18 09:15:43.783122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.745 ms 00:46:41.907 [2024-04-18 09:15:43.783158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.783337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.783542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:41.907 [2024-04-18 09:15:43.783636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:46:41.907 [2024-04-18 09:15:43.783670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.827574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.827808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:46:41.907 [2024-04-18 09:15:43.827891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.855 ms 00:46:41.907 [2024-04-18 09:15:43.827929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.874770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.875007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:46:41.907 [2024-04-18 09:15:43.875123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.766 ms 00:46:41.907 [2024-04-18 09:15:43.875169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.921147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.921422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:41.907 [2024-04-18 09:15:43.921557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.860 ms 00:46:41.907 [2024-04-18 09:15:43.921616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.967988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.907 [2024-04-18 09:15:43.968231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:41.907 [2024-04-18 09:15:43.968387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.230 ms 00:46:41.907 [2024-04-18 09:15:43.968433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.907 [2024-04-18 09:15:43.968509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:41.907 [2024-04-18 09:15:43.968617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.968678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 
[2024-04-18 09:15:43.968733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.968788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.968843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.968978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.969991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:46:41.907 [2024-04-18 09:15:43.970869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.970963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.971954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.972975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.973946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.974005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:41.907 [2024-04-18 09:15:43.974101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.974952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.975901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.976945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.977008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.977153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.977210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.977269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:41.908 [2024-04-18 09:15:43.977340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:41.908 [2024-04-18 09:15:43.977485] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
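One hundred near-identical band records are easier to audit mechanically than by eye. A small tally helper (summarize_bands is hypothetical, not part of the test suite); on this dump it returns Counter({'free': 100}), every band at 0/261120 valid blocks with wr_cnt 0:

    import re
    from collections import Counter

    BAND_RE = re.compile(r"Band \d+: \d+ / \d+ wr_cnt: \d+ state: (\w+)")

    def summarize_bands(log_text):
        """Tally band states from an ftl_dev_dump_bands dump."""
        return Counter(BAND_RE.findall(log_text))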
5a06ffee-18e7-4484-82dd-d6d34a9070af 00:46:41.908 [2024-04-18 09:15:43.977549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:41.908 [2024-04-18 09:15:43.977584] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:41.908 [2024-04-18 09:15:43.977628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:41.908 [2024-04-18 09:15:43.977677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:41.908 [2024-04-18 09:15:43.977766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:41.908 [2024-04-18 09:15:43.977803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:41.908 [2024-04-18 09:15:43.977837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:41.908 [2024-04-18 09:15:43.977870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:41.908 [2024-04-18 09:15:43.977941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:41.908 [2024-04-18 09:15:43.978034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.908 [2024-04-18 09:15:43.978077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:41.908 [2024-04-18 09:15:43.978151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.532 ms 00:46:41.908 [2024-04-18 09:15:43.978193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.908 [2024-04-18 09:15:44.001343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.908 [2024-04-18 09:15:44.001585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:41.908 [2024-04-18 09:15:44.001723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:46:41.908 [2024-04-18 09:15:44.001764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:41.908 [2024-04-18 09:15:44.002104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:41.908 [2024-04-18 09:15:44.002143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:41.908 [2024-04-18 09:15:44.002233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:46:41.908 [2024-04-18 09:15:44.002275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.063655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.063899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:42.167 [2024-04-18 09:15:44.064053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.167 [2024-04-18 09:15:44.064095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.064207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.064244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:42.167 [2024-04-18 09:15:44.064336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.167 [2024-04-18 09:15:44.064386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.064498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.064628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:42.167 
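The "WAF: inf" in the statistics follows from the two counters above it: taking write amplification in the conventional sense of media writes divided by user writes, 960 total writes against 0 user writes diverges:

    import math

    total_writes, user_writes = 960, 0   # from the stats dump
    waf = total_writes / user_writes if user_writes else math.inf
    assert math.isinf(waf)               # printed as "WAF: inf"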
[2024-04-18 09:15:44.064663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.167 [2024-04-18 09:15:44.064760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.064813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.064850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:42.167 [2024-04-18 09:15:44.064886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.167 [2024-04-18 09:15:44.064981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.202292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.202577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:42.167 [2024-04-18 09:15:44.202672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.167 [2024-04-18 09:15:44.202715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.167 [2024-04-18 09:15:44.260342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.167 [2024-04-18 09:15:44.260622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:42.167 [2024-04-18 09:15:44.260724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.260768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.260870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.260961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:42.168 [2024-04-18 09:15:44.261015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.261061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.261171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.261279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:42.168 [2024-04-18 09:15:44.261354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.261411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.261560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.261603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:42.168 [2024-04-18 09:15:44.261685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.261730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.261817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.261903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:42.168 [2024-04-18 09:15:44.261940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.261972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.262066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.262146] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:42.168 [2024-04-18 09:15:44.262216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.262258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.262330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:42.168 [2024-04-18 09:15:44.262476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:42.168 [2024-04-18 09:15:44.262515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:42.168 [2024-04-18 09:15:44.262547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:42.168 [2024-04-18 09:15:44.262714] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.440 ms, result 0 00:46:44.068 00:46:44.068 00:46:44.068 09:15:45 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:45.970 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:46:45.970 09:15:47 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:46:45.970 [2024-04-18 09:15:47.777082] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:46:45.970 [2024-04-18 09:15:47.777271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80659 ] 00:46:45.970 [2024-04-18 09:15:47.971880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:46.229 [2024-04-18 09:15:48.273094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:46.796 [2024-04-18 09:15:48.719727] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:46.796 [2024-04-18 09:15:48.719821] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:46.796 [2024-04-18 09:15:48.880688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:46.796 [2024-04-18 09:15:48.880756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:46.796 [2024-04-18 09:15:48.880775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:46.796 [2024-04-18 09:15:48.880787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:46.796 [2024-04-18 09:15:48.880870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:46.796 [2024-04-18 09:15:48.880887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:46.796 [2024-04-18 09:15:48.880899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:46:46.796 [2024-04-18 09:15:48.880911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:46.796 [2024-04-18 09:15:48.880937] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:46.796 [2024-04-18 09:15:48.882336] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:46.796 [2024-04-18 09:15:48.882390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:46.796 [2024-04-18 09:15:48.882407] 
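After the md5 check of the first file passes, restore.sh writes the test file to ftl0 again at --seek=131072. Assuming --seek counts output blocks, as with dd, and that ftl0 exposes the 4 KiB blocks used throughout this log, the second write lands 512 MiB into the device:

    seek_blocks, block_size = 131072, 4096   # --seek; assumed 4 KiB blocks
    offset_mib = seek_blocks * block_size // 2**20
    assert offset_mib == 512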
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:46.796 [2024-04-18 09:15:48.882424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:46:46.796 [2024-04-18 09:15:48.882438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:46.796 [2024-04-18 09:15:48.884067] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:47.055 [2024-04-18 09:15:48.907868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.907931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:47.055 [2024-04-18 09:15:48.907956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.800 ms 00:46:47.055 [2024-04-18 09:15:48.907968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.908101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.908116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:47.055 [2024-04-18 09:15:48.908129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:46:47.055 [2024-04-18 09:15:48.908141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.915849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.915894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:47.055 [2024-04-18 09:15:48.915908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.600 ms 00:46:47.055 [2024-04-18 09:15:48.915919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.916060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.916078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:47.055 [2024-04-18 09:15:48.916091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:46:47.055 [2024-04-18 09:15:48.916103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.916157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.916175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:47.055 [2024-04-18 09:15:48.916189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:46:47.055 [2024-04-18 09:15:48.916200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.916232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:47.055 [2024-04-18 09:15:48.922730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.922772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:47.055 [2024-04-18 09:15:48.922785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.505 ms 00:46:47.055 [2024-04-18 09:15:48.922798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.922837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.055 [2024-04-18 09:15:48.922849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:47.055 [2024-04-18 09:15:48.922861] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:47.055 [2024-04-18 09:15:48.922883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.055 [2024-04-18 09:15:48.922961] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:47.055 [2024-04-18 09:15:48.922992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:46:47.056 [2024-04-18 09:15:48.923032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:47.056 [2024-04-18 09:15:48.923056] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:46:47.056 [2024-04-18 09:15:48.923133] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:46:47.056 [2024-04-18 09:15:48.923155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:47.056 [2024-04-18 09:15:48.923170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:46:47.056 [2024-04-18 09:15:48.923185] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923198] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923215] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:47.056 [2024-04-18 09:15:48.923226] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:47.056 [2024-04-18 09:15:48.923237] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:46:47.056 [2024-04-18 09:15:48.923248] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:46:47.056 [2024-04-18 09:15:48.923260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:48.923271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:47.056 [2024-04-18 09:15:48.923283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:46:47.056 [2024-04-18 09:15:48.923294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:48.923363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:48.923390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:47.056 [2024-04-18 09:15:48.923405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:46:47.056 [2024-04-18 09:15:48.923416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:48.923494] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:47.056 [2024-04-18 09:15:48.923514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:47.056 [2024-04-18 09:15:48.923526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:47.056 [2024-04-18 09:15:48.923560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:46:47.056 [2024-04-18 09:15:48.923571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:47.056 [2024-04-18 09:15:48.923593] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:47.056 [2024-04-18 09:15:48.923614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:47.056 [2024-04-18 09:15:48.923625] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:47.056 [2024-04-18 09:15:48.923649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:47.056 [2024-04-18 09:15:48.923660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:47.056 [2024-04-18 09:15:48.923671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:46:47.056 [2024-04-18 09:15:48.923682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:47.056 [2024-04-18 09:15:48.923704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:46:47.056 [2024-04-18 09:15:48.923714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:46:47.056 [2024-04-18 09:15:48.923736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:46:47.056 [2024-04-18 09:15:48.923747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:47.056 [2024-04-18 09:15:48.923769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:47.056 [2024-04-18 09:15:48.923801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:47.056 [2024-04-18 09:15:48.923833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:47.056 [2024-04-18 09:15:48.923864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:46:47.056 [2024-04-18 09:15:48.923885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:47.056 [2024-04-18 09:15:48.923897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:47.056 [2024-04-18 09:15:48.923907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:47.056 [2024-04-18 09:15:48.923918] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:46:47.056 [2024-04-18 09:15:48.923928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:46:47.056 [2024-04-18 09:15:48.923939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:47.056 [2024-04-18 09:15:48.923949] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:47.056 [2024-04-18 09:15:48.923960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:47.056 [2024-04-18 09:15:48.923987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:47.056 [2024-04-18 09:15:48.924004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:47.056 [2024-04-18 09:15:48.924016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:47.056 [2024-04-18 09:15:48.924028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:47.056 [2024-04-18 09:15:48.924038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:47.056 [2024-04-18 09:15:48.924049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:47.056 [2024-04-18 09:15:48.924060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:47.056 [2024-04-18 09:15:48.924071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:47.056 [2024-04-18 09:15:48.924092] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:47.056 [2024-04-18 09:15:48.924106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:47.056 [2024-04-18 09:15:48.924122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:47.056 [2024-04-18 09:15:48.924136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:46:47.056 [2024-04-18 09:15:48.924148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:46:47.056 [2024-04-18 09:15:48.924160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:46:47.056 [2024-04-18 09:15:48.924172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:46:47.056 [2024-04-18 09:15:48.924185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:46:47.056 [2024-04-18 09:15:48.924196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:46:47.056 [2024-04-18 09:15:48.924208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:46:47.056 [2024-04-18 09:15:48.924220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:46:47.056 [2024-04-18 09:15:48.924232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:46:47.056 [2024-04-18 09:15:48.924244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:46:47.056 [2024-04-18 09:15:48.924256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:46:47.056 [2024-04-18 09:15:48.924268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:46:47.056 [2024-04-18 09:15:48.924280] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:47.056 [2024-04-18 09:15:48.924292] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:47.056 [2024-04-18 09:15:48.924305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:47.056 [2024-04-18 09:15:48.924317] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:47.056 [2024-04-18 09:15:48.924329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:47.056 [2024-04-18 09:15:48.924341] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:47.056 [2024-04-18 09:15:48.924353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:48.924365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:47.056 [2024-04-18 09:15:48.924388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:46:47.056 [2024-04-18 09:15:48.924399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:48.952167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:48.952219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:47.056 [2024-04-18 09:15:48.952238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.718 ms 00:46:47.056 [2024-04-18 09:15:48.952252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:48.952360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:48.952388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:47.056 [2024-04-18 09:15:48.952401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:46:47.056 [2024-04-18 09:15:48.952413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.023457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.023516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:47.056 [2024-04-18 09:15:49.023534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.962 ms 00:46:47.056 [2024-04-18 09:15:49.023549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.023614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.023626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:47.056 [2024-04-18 09:15:49.023638] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:47.056 [2024-04-18 09:15:49.023649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.024162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.024186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:47.056 [2024-04-18 09:15:49.024199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:46:47.056 [2024-04-18 09:15:49.024210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.024338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.024355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:47.056 [2024-04-18 09:15:49.024367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:46:47.056 [2024-04-18 09:15:49.024392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.049102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.049160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:47.056 [2024-04-18 09:15:49.049177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.684 ms 00:46:47.056 [2024-04-18 09:15:49.049189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.071785] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:47.056 [2024-04-18 09:15:49.071841] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:47.056 [2024-04-18 09:15:49.071857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.071869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:47.056 [2024-04-18 09:15:49.071884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.512 ms 00:46:47.056 [2024-04-18 09:15:49.071895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.106168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.106237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:47.056 [2024-04-18 09:15:49.106256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.214 ms 00:46:47.056 [2024-04-18 09:15:49.106269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.130270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.130329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:47.056 [2024-04-18 09:15:49.130347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.929 ms 00:46:47.056 [2024-04-18 09:15:49.130383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.152964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.153055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:47.056 [2024-04-18 09:15:49.153072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.523 ms 
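The superblock metadata layout dumped a little above by upgrade/ftl_sb_v5.c can be sanity-checked offline: within the nvc layout, each region's blk_offs plus blk_sz should equal the next region's blk_offs, and the final region should end exactly at the device capacity. A minimal sketch of that check, with the (type, blk_offs, blk_sz) triples copied from this log; the 4096-byte FTL block size is an assumption here, and the script is illustrative rather than part of the SPDK test suite:

    # Contiguity check for the nvc-side SB metadata layout dumped above.
    # (type, blk_offs, blk_sz) triples are copied verbatim from the
    # ftl_sb_v5 dump in this log; a 4096-byte FTL block size is assumed.
    nvc_regions = [
        (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
        (0x4, 0x50a0, 0x80), (0xa, 0x5120, 0x400), (0xb, 0x5520, 0x400),
        (0xc, 0x5920, 0x400), (0xd, 0x5d20, 0x400), (0xe, 0x6120, 0x40),
        (0xf, 0x6160, 0x40), (0x6, 0x61a0, 0x20), (0x7, 0x61c0, 0x20),
        (0x8, 0x61e0, 0x100000), (0xfffffffe, 0x1061e0, 0x3d120),
    ]
    for (t, offs, sz), (nt, noffs, _) in zip(nvc_regions, nvc_regions[1:]):
        assert offs + sz == noffs, f"gap/overlap between type {t:#x} and {nt:#x}"
    end = nvc_regions[-1][1] + nvc_regions[-1][2]
    print(f"layout spans {end * 4096 / 2**20:.2f} MiB")  # 5171.00 MiB

The 5171.00 MiB result matches the NV cache device capacity reported later in this log, so the dumped regions tile the cache device with no gaps or overlaps.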
00:46:47.056 [2024-04-18 09:15:49.153084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.056 [2024-04-18 09:15:49.153704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.056 [2024-04-18 09:15:49.153733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:47.056 [2024-04-18 09:15:49.153747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:46:47.056 [2024-04-18 09:15:49.153758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.263904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.263986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:47.315 [2024-04-18 09:15:49.264021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.121 ms 00:46:47.315 [2024-04-18 09:15:49.264034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.280358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:47.315 [2024-04-18 09:15:49.283946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.284043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:47.315 [2024-04-18 09:15:49.284061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.832 ms 00:46:47.315 [2024-04-18 09:15:49.284073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.284194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.284216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:47.315 [2024-04-18 09:15:49.284229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:47.315 [2024-04-18 09:15:49.284241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.284321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.284335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:47.315 [2024-04-18 09:15:49.284348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:46:47.315 [2024-04-18 09:15:49.284359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.286842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.286874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:46:47.315 [2024-04-18 09:15:49.286889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:46:47.315 [2024-04-18 09:15:49.286899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.286927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.286937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:47.315 [2024-04-18 09:15:49.286949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:47.315 [2024-04-18 09:15:49.286958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.286994] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:47.315 
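Each management step above is traced by mngt/ftl_mngt.c as a four-record group (Action, name, duration, status), and finish_msg then reports the wall time for the whole process (for example 'FTL startup', duration = 451.176 ms, just below). A minimal sketch that sums the per-step durations for comparison; it assumes the log has been split back to one record per line, and 'ftl.log' is a hypothetical file name, not something the test produces:

    import re

    # Aggregate trace_step durations by step name from an FTL log.
    # The regexes match the mngt/ftl_mngt.c NOTICE format seen in this
    # log; steps traced in both startup and shutdown sum under one name.
    name_re = re.compile(r"407:trace_step: .* name: (.+)")
    dur_re = re.compile(r"409:trace_step: .* duration: ([0-9.]+) ms")

    steps, current = {}, None
    for line in open("ftl.log"):
        if m := name_re.search(line):
            current = m.group(1).strip()
        elif (m := dur_re.search(line)) and current:
            steps[current] = steps.get(current, 0.0) + float(m.group(1))
            current = None

    for name, ms in sorted(steps.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(steps.values()):10.3f} ms  attributed to steps")

The attributed total should come in somewhat under the finish_msg figure, since time spent between steps is not traced.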
[2024-04-18 09:15:49.287005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.287015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:47.315 [2024-04-18 09:15:49.287042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:46:47.315 [2024-04-18 09:15:49.287055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.330848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.330912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:47.315 [2024-04-18 09:15:49.330929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.768 ms 00:46:47.315 [2024-04-18 09:15:49.330957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.331055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:47.315 [2024-04-18 09:15:49.331075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:47.315 [2024-04-18 09:15:49.331088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:46:47.315 [2024-04-18 09:15:49.331100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:47.315 [2024-04-18 09:15:49.332390] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 451.176 ms, result 0 00:47:17.659  Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-18 09:16:19.711608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.659 [2024-04-18 09:16:19.711860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:17.659 [2024-04-18 09:16:19.712023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:17.659 [2024-04-18 09:16:19.712074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.659 [2024-04-18 09:16:19.713128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:17.659 [2024-04-18 09:16:19.719139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.659 [2024-04-18 09:16:19.719311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:17.659 [2024-04-18 09:16:19.719433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:47:17.659 [2024-04-18 09:16:19.719555]
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.659 [2024-04-18 09:16:19.733566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.659 [2024-04-18 09:16:19.733835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:17.659 [2024-04-18 09:16:19.733929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:47:17.659 [2024-04-18 09:16:19.733968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.659 [2024-04-18 09:16:19.756114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.659 [2024-04-18 09:16:19.756388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:17.659 [2024-04-18 09:16:19.756513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.042 ms 00:47:17.659 [2024-04-18 09:16:19.756557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.918 [2024-04-18 09:16:19.762564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.918 [2024-04-18 09:16:19.762772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:47:17.918 [2024-04-18 09:16:19.762860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.887 ms 00:47:17.918 [2024-04-18 09:16:19.762901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.918 [2024-04-18 09:16:19.807911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.918 [2024-04-18 09:16:19.808197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:17.918 [2024-04-18 09:16:19.808285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.856 ms 00:47:17.918 [2024-04-18 09:16:19.808325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.918 [2024-04-18 09:16:19.832925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.918 [2024-04-18 09:16:19.833193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:17.918 [2024-04-18 09:16:19.833332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.503 ms 00:47:17.918 [2024-04-18 09:16:19.833404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.919 [2024-04-18 09:16:19.918941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.919 [2024-04-18 09:16:19.919190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:17.919 [2024-04-18 09:16:19.919283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.438 ms 00:47:17.919 [2024-04-18 09:16:19.919322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.919 [2024-04-18 09:16:19.962150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.919 [2024-04-18 09:16:19.962414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:47:17.919 [2024-04-18 09:16:19.962546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.746 ms 00:47:17.919 [2024-04-18 09:16:19.962583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:17.919 [2024-04-18 09:16:20.005780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:17.919 [2024-04-18 09:16:20.006044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:47:17.919 [2024-04-18 09:16:20.006140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.115 ms 00:47:17.919 [2024-04-18 09:16:20.006178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.178 [2024-04-18 09:16:20.047341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:18.178 [2024-04-18 09:16:20.047624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:18.178 [2024-04-18 09:16:20.047706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.080 ms 00:47:18.178 [2024-04-18 09:16:20.047744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.178 [2024-04-18 09:16:20.089740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:18.178 [2024-04-18 09:16:20.089962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:18.178 [2024-04-18 09:16:20.090071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.821 ms 00:47:18.178 [2024-04-18 09:16:20.090107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.178 [2024-04-18 09:16:20.090212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:18.178 [2024-04-18 09:16:20.090284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:47:18.178 [2024-04-18 09:16:20.090415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.090970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.091956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.092951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.093968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.094958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095418] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.095938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.096956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.097007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.097058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:18.179 [2024-04-18 09:16:20.097162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097215] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:18.180 [2024-04-18 09:16:20.097935] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:18.180 [2024-04-18 09:16:20.097973] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a06ffee-18e7-4484-82dd-d6d34a9070af 00:47:18.180 [2024-04-18 09:16:20.098060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784 00:47:18.180 [2024-04-18 09:16:20.098094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744 00:47:18.180 [2024-04-18 09:16:20.098126] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784 00:47:18.180 [2024-04-18 09:16:20.098227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:47:18.180 [2024-04-18 09:16:20.098293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:18.180 [2024-04-18 09:16:20.098325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:18.180 [2024-04-18 09:16:20.098356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:18.180 [2024-04-18 09:16:20.098399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:18.180 [2024-04-18 09:16:20.098432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:18.180 [2024-04-18 09:16:20.098465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:18.180 [2024-04-18 09:16:20.098497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:18.180 [2024-04-18 09:16:20.098630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.254 ms 00:47:18.180 [2024-04-18 09:16:20.098667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.121343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:18.180 [2024-04-18 09:16:20.121667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:18.180 [2024-04-18 09:16:20.121818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.588 ms 00:47:18.180 [2024-04-18 09:16:20.121858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.122171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:18.180 [2024-04-18 09:16:20.122215] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:18.180 [2024-04-18 09:16:20.122397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:47:18.180 [2024-04-18 09:16:20.122439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.184876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.180 [2024-04-18 09:16:20.185137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:18.180 [2024-04-18 09:16:20.185258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.180 [2024-04-18 09:16:20.185301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.185433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.180 [2024-04-18 09:16:20.185510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:18.180 [2024-04-18 09:16:20.185566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.180 [2024-04-18 09:16:20.185599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.185721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.180 [2024-04-18 09:16:20.185764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:18.180 [2024-04-18 09:16:20.185896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.180 [2024-04-18 09:16:20.185939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.180 [2024-04-18 09:16:20.185989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.180 [2024-04-18 09:16:20.186034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:18.180 [2024-04-18 09:16:20.186070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.180 [2024-04-18 09:16:20.186104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.320228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.320521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:18.439 [2024-04-18 09:16:20.320662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.320707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.375112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.375354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:18.439 [2024-04-18 09:16:20.375478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.375522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.375623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.375763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:18.439 [2024-04-18 09:16:20.375817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.375850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.375918] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.375955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:18.439 [2024-04-18 09:16:20.376012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.376047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.376278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.376335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:18.439 [2024-04-18 09:16:20.376385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.376469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.376546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.376642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:18.439 [2024-04-18 09:16:20.376751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.376800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.376941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.376995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:18.439 [2024-04-18 09:16:20.377160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.377202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.377280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:18.439 [2024-04-18 09:16:20.377323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:18.439 [2024-04-18 09:16:20.377367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:18.439 [2024-04-18 09:16:20.377466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:18.439 [2024-04-18 09:16:20.377633] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 668.791 ms, result 0 00:47:20.346 00:47:20.346 00:47:20.346 09:16:22 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:47:20.346 [2024-04-18 09:16:22.257959] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
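The shutdown statistics dumped by ftl_debug.c a little above make the write-amplification figure easy to recompute: WAF here is total media writes divided by user writes, and the single open band holds exactly the 118784 blocks reported as total valid LBAs. A worked check against the numbers in this log (inputs copied from the dump; the arithmetic is illustrative):

    # Recompute the ftl_debug.c shutdown statistics from this log.
    total_writes = 119744  # "total writes"
    user_writes = 118784   # "user writes" == "total valid LBAs"
    print(f"WAF: {total_writes / user_writes:.4f}")  # 1.0081, as dumped
    # Band 1 was the only non-free band: 118784 of 261120 blocks valid.
    print(f"open band fill: {118784 / 261120:.2%}")  # about 45.49%

The extra 960 blocks over the user writes largely reflect the FTL's own metadata persistence (superblock, band, trim, and valid-map writes) on top of the user data.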
00:47:20.347 [2024-04-18 09:16:22.258572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81000 ] 00:47:20.347 [2024-04-18 09:16:22.435318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:20.605 [2024-04-18 09:16:22.698717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:21.190 [2024-04-18 09:16:23.141792] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:21.190 [2024-04-18 09:16:23.142405] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:21.450 [2024-04-18 09:16:23.303529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.303801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:21.450 [2024-04-18 09:16:23.303939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:21.450 [2024-04-18 09:16:23.303988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.304153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.304291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:21.450 [2024-04-18 09:16:23.304366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:47:21.450 [2024-04-18 09:16:23.304419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.304476] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:21.450 [2024-04-18 09:16:23.305963] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:21.450 [2024-04-18 09:16:23.306134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.306221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:21.450 [2024-04-18 09:16:23.306309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:47:21.450 [2024-04-18 09:16:23.306348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.308082] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:21.450 [2024-04-18 09:16:23.329889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.330084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:21.450 [2024-04-18 09:16:23.330184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.806 ms 00:47:21.450 [2024-04-18 09:16:23.330224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.330321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.330434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:21.450 [2024-04-18 09:16:23.330476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:47:21.450 [2024-04-18 09:16:23.330509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.338042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 
09:16:23.338218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:21.450 [2024-04-18 09:16:23.338369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.350 ms 00:47:21.450 [2024-04-18 09:16:23.338421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.338554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.338593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:21.450 [2024-04-18 09:16:23.338626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:47:21.450 [2024-04-18 09:16:23.338701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.338790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.338885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:21.450 [2024-04-18 09:16:23.338955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:47:21.450 [2024-04-18 09:16:23.338986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.339039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:21.450 [2024-04-18 09:16:23.345421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.345546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:21.450 [2024-04-18 09:16:23.345636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.389 ms 00:47:21.450 [2024-04-18 09:16:23.345674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.345736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.345772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:21.450 [2024-04-18 09:16:23.345805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:47:21.450 [2024-04-18 09:16:23.345885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.345973] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:21.450 [2024-04-18 09:16:23.346033] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:47:21.450 [2024-04-18 09:16:23.346163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:21.450 [2024-04-18 09:16:23.346227] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:47:21.450 [2024-04-18 09:16:23.346341] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:47:21.450 [2024-04-18 09:16:23.346440] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:21.450 [2024-04-18 09:16:23.346535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:47:21.450 [2024-04-18 09:16:23.346595] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:21.450 [2024-04-18 09:16:23.346648] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:21.450 [2024-04-18 09:16:23.346812] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:21.450 [2024-04-18 09:16:23.346843] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:21.450 [2024-04-18 09:16:23.346875] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:47:21.450 [2024-04-18 09:16:23.346906] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:47:21.450 [2024-04-18 09:16:23.346939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.346973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:21.450 [2024-04-18 09:16:23.347061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:47:21.450 [2024-04-18 09:16:23.347101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.347198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.450 [2024-04-18 09:16:23.347281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:21.450 [2024-04-18 09:16:23.347323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:47:21.450 [2024-04-18 09:16:23.347355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.450 [2024-04-18 09:16:23.347511] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:21.450 [2024-04-18 09:16:23.347611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:21.450 [2024-04-18 09:16:23.347647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:21.450 [2024-04-18 09:16:23.347710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:21.450 [2024-04-18 09:16:23.347744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:21.450 [2024-04-18 09:16:23.347832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:21.450 [2024-04-18 09:16:23.347898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:21.450 [2024-04-18 09:16:23.347927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:21.450 [2024-04-18 09:16:23.347955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:21.450 [2024-04-18 09:16:23.347994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:21.450 [2024-04-18 09:16:23.348044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:21.450 [2024-04-18 09:16:23.348076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:21.450 [2024-04-18 09:16:23.348199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:21.450 [2024-04-18 09:16:23.348261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:21.450 [2024-04-18 09:16:23.348293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:47:21.450 [2024-04-18 09:16:23.348324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:21.450 [2024-04-18 09:16:23.348355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:21.450 [2024-04-18 09:16:23.348398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:47:21.450 [2024-04-18 09:16:23.348430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:47:21.450 [2024-04-18 09:16:23.348461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:47:21.450 [2024-04-18 09:16:23.348549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:47:21.450 [2024-04-18 09:16:23.348587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:47:21.450 [2024-04-18 09:16:23.348619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:21.450 [2024-04-18 09:16:23.348651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:21.450 [2024-04-18 09:16:23.348681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:47:21.450 [2024-04-18 09:16:23.348712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:21.450 [2024-04-18 09:16:23.348798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:47:21.450 [2024-04-18 09:16:23.348829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:47:21.450 [2024-04-18 09:16:23.348860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:21.450 [2024-04-18 09:16:23.348890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:21.450 [2024-04-18 09:16:23.348921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:47:21.450 [2024-04-18 09:16:23.348990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:21.450 [2024-04-18 09:16:23.349069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:47:21.450 [2024-04-18 09:16:23.349166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:47:21.450 [2024-04-18 09:16:23.349200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:21.450 [2024-04-18 09:16:23.349229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:21.450 [2024-04-18 09:16:23.349257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:21.450 [2024-04-18 09:16:23.349315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:21.450 [2024-04-18 09:16:23.349345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:47:21.450 [2024-04-18 09:16:23.349383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:21.450 [2024-04-18 09:16:23.349416] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:21.451 [2024-04-18 09:16:23.349445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:21.451 [2024-04-18 09:16:23.349509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:21.451 [2024-04-18 09:16:23.349622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:21.451 [2024-04-18 09:16:23.349658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:21.451 [2024-04-18 09:16:23.349687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:21.451 [2024-04-18 09:16:23.349775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:21.451 [2024-04-18 09:16:23.349810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:21.451 [2024-04-18 09:16:23.349839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:21.451 [2024-04-18 09:16:23.349907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:21.451 [2024-04-18 09:16:23.349940] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:21.451 [2024-04-18 09:16:23.349990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:21.451 [2024-04-18 09:16:23.350078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:21.451 [2024-04-18 09:16:23.350127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:47:21.451 [2024-04-18 09:16:23.350209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:47:21.451 [2024-04-18 09:16:23.350259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:47:21.451 [2024-04-18 09:16:23.350306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:47:21.451 [2024-04-18 09:16:23.350411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:47:21.451 [2024-04-18 09:16:23.350458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:47:21.451 [2024-04-18 09:16:23.350594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:47:21.451 [2024-04-18 09:16:23.350644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:47:21.451 [2024-04-18 09:16:23.350722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:47:21.451 [2024-04-18 09:16:23.350814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:47:21.451 [2024-04-18 09:16:23.350903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:47:21.451 [2024-04-18 09:16:23.350954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:47:21.451 [2024-04-18 09:16:23.351083] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:21.451 [2024-04-18 09:16:23.351170] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:21.451 [2024-04-18 09:16:23.351253] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:21.451 [2024-04-18 09:16:23.351360] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:21.451 [2024-04-18 09:16:23.351433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:21.451 [2024-04-18 09:16:23.351560] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:47:21.451 [2024-04-18 09:16:23.351611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.351678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:21.451 [2024-04-18 09:16:23.351709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.137 ms 00:47:21.451 [2024-04-18 09:16:23.351739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.378055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.378284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:21.451 [2024-04-18 09:16:23.378450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.231 ms 00:47:21.451 [2024-04-18 09:16:23.378495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.378621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.378686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:21.451 [2024-04-18 09:16:23.378760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:47:21.451 [2024-04-18 09:16:23.378793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.450157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.450422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:21.451 [2024-04-18 09:16:23.450551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.266 ms 00:47:21.451 [2024-04-18 09:16:23.450610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.450751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.450795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:21.451 [2024-04-18 09:16:23.450831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:21.451 [2024-04-18 09:16:23.450923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.451579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.451715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:21.451 [2024-04-18 09:16:23.451806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:47:21.451 [2024-04-18 09:16:23.451913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.452112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.452169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:21.451 [2024-04-18 09:16:23.452259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:47:21.451 [2024-04-18 09:16:23.452312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.477663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.477880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:21.451 [2024-04-18 09:16:23.478021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.295 ms 00:47:21.451 [2024-04-18 
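Annotation: the superblock dump above repeats the layout information from the MiB-denominated region dumps, but in raw 4 KiB FTL blocks written in hex. A quick consistency check follows; this is a minimal bash sketch, and the region-type-to-name mapping is inferred here by matching sizes against the dumps above, not stated explicitly in the log:

  # Convert a blk_sz from the superblock dump (hex count of 4 KiB FTL blocks) to MiB.
  to_mib() { echo $(( $1 * 4096 / 1048576 )); }
  to_mib 0x5000     # 80   -> "Region l2p ... blocks: 80.00 MiB"        (type 0x2)
  to_mib 0x400      # 4    -> "Region p2l0..p2l3 ... blocks: 4.00 MiB"  (types 0xa-0xd)
  to_mib 0x100000   # 4096 -> "Region data_nvc ... blocks: 4096.00 MiB" (type 0x8)
  # Cross-checks against the startup notices: 20971520 L2P entries * 4 B address size
  # is exactly the 80 MiB l2p region, and data_nvc split over the 4 NV cache chunks
  # gives 1024 MiB per chunk.
  echo $(( 20971520 * 4 / 1048576 ))          # 80
  echo $(( 0x100000 * 4096 / 1048576 / 4 ))   # 1024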
09:16:23.478060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.499306] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:21.451 [2024-04-18 09:16:23.499536] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:21.451 [2024-04-18 09:16:23.499672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.499711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:21.451 [2024-04-18 09:16:23.499748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.449 ms 00:47:21.451 [2024-04-18 09:16:23.499783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.451 [2024-04-18 09:16:23.533492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.451 [2024-04-18 09:16:23.533683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:21.451 [2024-04-18 09:16:23.533765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.632 ms 00:47:21.451 [2024-04-18 09:16:23.533803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.709 [2024-04-18 09:16:23.554706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.709 [2024-04-18 09:16:23.554905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:21.709 [2024-04-18 09:16:23.555023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.812 ms 00:47:21.709 [2024-04-18 09:16:23.555062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.709 [2024-04-18 09:16:23.575682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.709 [2024-04-18 09:16:23.575896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:21.709 [2024-04-18 09:16:23.575989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.536 ms 00:47:21.709 [2024-04-18 09:16:23.576034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.709 [2024-04-18 09:16:23.576752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.709 [2024-04-18 09:16:23.576881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:21.709 [2024-04-18 09:16:23.576958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:47:21.709 [2024-04-18 09:16:23.576996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.709 [2024-04-18 09:16:23.674599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.674836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:21.710 [2024-04-18 09:16:23.674924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.554 ms 00:47:21.710 [2024-04-18 09:16:23.674960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.689989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:21.710 [2024-04-18 09:16:23.693543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.693690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:21.710 [2024-04-18 09:16:23.693786] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.456 ms 00:47:21.710 [2024-04-18 09:16:23.693825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.693961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.694013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:21.710 [2024-04-18 09:16:23.694126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:47:21.710 [2024-04-18 09:16:23.694166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.695713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.695845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:21.710 [2024-04-18 09:16:23.695938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:47:21.710 [2024-04-18 09:16:23.695976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.698309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.698477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:47:21.710 [2024-04-18 09:16:23.698614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:47:21.710 [2024-04-18 09:16:23.698656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.698761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.698798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:21.710 [2024-04-18 09:16:23.698829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:21.710 [2024-04-18 09:16:23.698927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.699055] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:21.710 [2024-04-18 09:16:23.699143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.699225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:21.710 [2024-04-18 09:16:23.699323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:47:21.710 [2024-04-18 09:16:23.699384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.738518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.738759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:21.710 [2024-04-18 09:16:23.738881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.071 ms 00:47:21.710 [2024-04-18 09:16:23.738923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.739032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:21.710 [2024-04-18 09:16:23.739080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:21.710 [2024-04-18 09:16:23.739113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:47:21.710 [2024-04-18 09:16:23.739200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:21.710 [2024-04-18 09:16:23.745494] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 440.771 ms, result 0 00:47:52.696 Copying: 1024/1024 [MB] (average 33 MBps) [intermediate Copying progress ticks elided: 27-36 MBps per interval][2024-04-18 09:16:54.785062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.696 [2024-04-18 09:16:54.785409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:52.696 [2024-04-18 09:16:54.785529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:52.696 [2024-04-18 09:16:54.785582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.696 [2024-04-18 09:16:54.785700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:52.696 [2024-04-18 09:16:54.791971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.696 [2024-04-18 09:16:54.792198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:52.696 [2024-04-18 09:16:54.792305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:47:52.696 [2024-04-18 09:16:54.792356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.696 [2024-04-18 09:16:54.792728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.696 [2024-04-18 09:16:54.792859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:52.696 [2024-04-18 09:16:54.792966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:47:52.696 [2024-04-18 09:16:54.793082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.696 [2024-04-18 09:16:54.798483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.696 [2024-04-18 09:16:54.798668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:52.696 [2024-04-18 09:16:54.798790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.333 ms 00:47:52.955 [2024-04-18 09:16:54.798884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:54.805405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:54.805534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:47:52.955 [2024-04-18 09:16:54.805624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.449 ms 00:47:52.955 [2024-04-18 09:16:54.805662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:54.849443] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:54.849663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:52.955 [2024-04-18 09:16:54.849752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.714 ms 00:47:52.955 [2024-04-18 09:16:54.849792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:54.875278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:54.875550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:52.955 [2024-04-18 09:16:54.875653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.400 ms 00:47:52.955 [2024-04-18 09:16:54.875707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:54.947686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:54.947950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:52.955 [2024-04-18 09:16:54.948055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.893 ms 00:47:52.955 [2024-04-18 09:16:54.948098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:54.992360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:54.992569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:47:52.955 [2024-04-18 09:16:54.992662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.167 ms 00:47:52.955 [2024-04-18 09:16:54.992703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.955 [2024-04-18 09:16:55.034395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.955 [2024-04-18 09:16:55.034584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:47:52.955 [2024-04-18 09:16:55.034705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.562 ms 00:47:52.955 [2024-04-18 09:16:55.034743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.215 [2024-04-18 09:16:55.076982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.215 [2024-04-18 09:16:55.077224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:53.215 [2024-04-18 09:16:55.077332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.169 ms 00:47:53.215 [2024-04-18 09:16:55.077384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.215 [2024-04-18 09:16:55.121606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.215 [2024-04-18 09:16:55.121824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:53.215 [2024-04-18 09:16:55.121914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.072 ms 00:47:53.215 [2024-04-18 09:16:55.121952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.215 [2024-04-18 09:16:55.122088] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:53.215 [2024-04-18 09:16:55.122190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:47:53.215 [2024-04-18 09:16:55.122284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:47:53.215 [2024-04-18 09:16:55.122467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.122565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.122648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.122704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.122815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.122907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.123931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:47:53.215 [2024-04-18 09:16:55.124605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.124995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.125974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.126985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.127964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.128123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.128240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.128350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.128427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:53.215 [2024-04-18 09:16:55.128513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.128609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.128744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.128825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.128882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.128938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129300] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.129930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.130939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:53.216 [2024-04-18 09:16:55.131768] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:53.216 [2024-04-18 09:16:55.131880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 5a06ffee-18e7-4484-82dd-d6d34a9070af 00:47:53.216 [2024-04-18 09:16:55.132017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:47:53.216 [2024-04-18 09:16:55.132076] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15552 00:47:53.216 [2024-04-18 09:16:55.132135] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14592 00:47:53.216 [2024-04-18 09:16:55.132172] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0658 00:47:53.216 [2024-04-18 09:16:55.132222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:53.216 [2024-04-18 09:16:55.132334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:53.216 [2024-04-18 09:16:55.132389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:53.216 [2024-04-18 09:16:55.132491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:53.216 [2024-04-18 09:16:55.132534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:53.216 [2024-04-18 09:16:55.132636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.216 [2024-04-18 09:16:55.132680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:53.216 [2024-04-18 09:16:55.132776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.548 ms 00:47:53.216 [2024-04-18 09:16:55.132819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.156692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.216 [2024-04-18 09:16:55.156911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:53.216 [2024-04-18 09:16:55.157020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.719 ms 00:47:53.216 [2024-04-18 09:16:55.157061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.157450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.216 [2024-04-18 09:16:55.157561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:53.216 [2024-04-18 09:16:55.157643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:47:53.216 [2024-04-18 09:16:55.157732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.218280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.216 [2024-04-18 09:16:55.218500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:53.216 [2024-04-18 09:16:55.218599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.216 [2024-04-18 09:16:55.218641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.218801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.216 [2024-04-18 09:16:55.218846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:53.216 [2024-04-18 09:16:55.218940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.216 [2024-04-18 09:16:55.219046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.219170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.216 [2024-04-18 09:16:55.219312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
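Annotation: the statistics block above contains enough to recompute the reported write amplification factor: WAF = total writes / user writes. A one-line check, using only the two counters printed in this run:

  # ftl_dev_dump_stats reported total writes: 15552, user writes: 14592.
  awk 'BEGIN { printf "WAF: %.4f\n", 15552 / 14592 }'   # -> WAF: 1.0658, as logged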
name: Initialize trim map 00:47:53.216 [2024-04-18 09:16:55.219356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.216 [2024-04-18 09:16:55.219403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.216 [2024-04-18 09:16:55.219450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.216 [2024-04-18 09:16:55.219496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:53.216 [2024-04-18 09:16:55.219534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.216 [2024-04-18 09:16:55.219568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.350712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.350894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:53.475 [2024-04-18 09:16:55.350968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.351003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.402378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.402640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:53.475 [2024-04-18 09:16:55.402742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.402780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.402871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.402944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:53.475 [2024-04-18 09:16:55.402982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.403014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.403124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.403165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:53.475 [2024-04-18 09:16:55.403345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.403404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.403558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.403607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:53.475 [2024-04-18 09:16:55.403678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.403711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.403772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.403809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:53.475 [2024-04-18 09:16:55.403841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.403878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.404054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 
09:16:55.404093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:53.475 [2024-04-18 09:16:55.404127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.404162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.404231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:53.475 [2024-04-18 09:16:55.404268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:53.475 [2024-04-18 09:16:55.404309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:53.475 [2024-04-18 09:16:55.404485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.475 [2024-04-18 09:16:55.404654] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 619.559 ms, result 0 00:47:54.851 00:47:54.851 00:47:55.110 09:16:56 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:57.656 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:47:57.656 09:16:59 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:47:57.656 09:16:59 -- ftl/restore.sh@85 -- # restore_kill 00:47:57.656 09:16:59 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:47:57.656 09:16:59 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:57.656 09:16:59 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:57.656 09:16:59 -- ftl/restore.sh@32 -- # killprocess 79704 00:47:57.656 09:16:59 -- common/autotest_common.sh@936 -- # '[' -z 79704 ']' 00:47:57.656 09:16:59 -- common/autotest_common.sh@940 -- # kill -0 79704 00:47:57.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79704) - No such process 00:47:57.656 09:16:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79704 is not found' 00:47:57.656 Process with pid 79704 is not found 00:47:57.656 09:16:59 -- ftl/restore.sh@33 -- # remove_shm 00:47:57.656 09:16:59 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:47:57.656 Remove shared memory files 00:47:57.656 09:16:59 -- ftl/common.sh@205 -- # rm -f rm -f 00:47:57.656 09:16:59 -- ftl/common.sh@206 -- # rm -f rm -f 00:47:57.656 09:16:59 -- ftl/common.sh@207 -- # rm -f rm -f 00:47:57.656 09:16:59 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:47:57.656 09:16:59 -- ftl/common.sh@209 -- # rm -f rm -f 00:47:57.656 ************************************ 00:47:57.656 END TEST ftl_restore 00:47:57.656 ************************************ 00:47:57.656 00:47:57.656 real 2m43.155s 00:47:57.656 user 2m29.421s 00:47:57.656 sys 0m15.445s 00:47:57.656 09:16:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:47:57.656 09:16:59 -- common/autotest_common.sh@10 -- # set +x 00:47:57.656 09:16:59 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:47:57.656 09:16:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:47:57.656 09:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:47:57.656 09:16:59 -- common/autotest_common.sh@10 -- # set +x 00:47:57.656 ************************************ 00:47:57.656 START TEST ftl_dirty_shutdown 00:47:57.656 ************************************ 00:47:57.656 09:16:59 -- common/autotest_common.sh@1111 -- # 
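Annotation: the md5sum -c step above is the pass criterion for the whole restore test: a checksum recorded before the FTL device was shut down must verify after it has been brought back up. A minimal sketch of that idiom (the real files are the testfile/testfile.md5 pair under test/ftl shown in the trace):

  md5sum testfile > testfile.md5   # recorded before the FTL restart
  md5sum -c testfile.md5           # after restore; prints "testfile: OK" on success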
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:47:57.656 * Looking for test storage... 00:47:57.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:47:57.656 09:16:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:47:57.656 09:16:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.656 09:16:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.656 09:16:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:47:57.656 09:16:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:47:57.656 09:16:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:57.656 09:16:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:47:57.656 09:16:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:47:57.656 09:16:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.656 09:16:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.656 09:16:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:47:57.656 09:16:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:47:57.656 09:16:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:57.656 09:16:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:57.656 09:16:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:47:57.656 09:16:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:47:57.656 09:16:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.656 09:16:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.656 09:16:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:47:57.656 09:16:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:47:57.656 09:16:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:57.656 09:16:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:57.656 09:16:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:57.656 09:16:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:57.656 09:16:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:47:57.656 09:16:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:47:57.656 09:16:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:57.656 09:16:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 
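Annotation: the xtrace above shows ftl/common.sh resolving the test and repo directories before anything else runs. Reconstructed from the dirname/readlink calls in the trace, the idiom amounts to:

  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py             # RPC client used by every bdev_* call below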
00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@45 -- # svcpid=81436 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81436 00:47:57.656 09:16:59 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:47:57.656 09:16:59 -- common/autotest_common.sh@817 -- # '[' -z 81436 ']' 00:47:57.656 09:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:57.656 09:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:47:57.657 09:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:57.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:57.657 09:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:47:57.657 09:16:59 -- common/autotest_common.sh@10 -- # set +x 00:47:57.657 [2024-04-18 09:16:59.732071] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:47:57.657 [2024-04-18 09:16:59.732419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81436 ] 00:47:57.915 [2024-04-18 09:16:59.905494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:58.172 [2024-04-18 09:17:00.250793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:59.543 09:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:47:59.543 09:17:01 -- common/autotest_common.sh@850 -- # return 0 00:47:59.543 09:17:01 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:47:59.543 09:17:01 -- ftl/common.sh@54 -- # local name=nvme0 00:47:59.543 09:17:01 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:47:59.543 09:17:01 -- ftl/common.sh@56 -- # local size=103424 00:47:59.543 09:17:01 -- ftl/common.sh@59 -- # local base_bdev 00:47:59.543 09:17:01 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:47:59.801 09:17:01 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:47:59.801 09:17:01 -- ftl/common.sh@62 -- # local base_size 00:47:59.801 09:17:01 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:47:59.801 09:17:01 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:47:59.801 09:17:01 -- common/autotest_common.sh@1365 -- # local bdev_info 00:47:59.801 09:17:01 -- common/autotest_common.sh@1366 -- # local bs 00:47:59.801 09:17:01 -- common/autotest_common.sh@1367 -- # local nb 00:47:59.801 09:17:01 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:48:00.070 09:17:01 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:48:00.070 { 00:48:00.070 "name": "nvme0n1", 00:48:00.070 "aliases": [ 00:48:00.070 "5c27aa34-42a7-42e1-bb7d-3e653caa5d75" 00:48:00.070 ], 00:48:00.070 "product_name": "NVMe disk", 00:48:00.070 "block_size": 4096, 00:48:00.070 "num_blocks": 1310720, 00:48:00.070 "uuid": 
"5c27aa34-42a7-42e1-bb7d-3e653caa5d75", 00:48:00.070 "assigned_rate_limits": { 00:48:00.070 "rw_ios_per_sec": 0, 00:48:00.070 "rw_mbytes_per_sec": 0, 00:48:00.070 "r_mbytes_per_sec": 0, 00:48:00.070 "w_mbytes_per_sec": 0 00:48:00.070 }, 00:48:00.070 "claimed": true, 00:48:00.070 "claim_type": "read_many_write_one", 00:48:00.070 "zoned": false, 00:48:00.070 "supported_io_types": { 00:48:00.070 "read": true, 00:48:00.070 "write": true, 00:48:00.070 "unmap": true, 00:48:00.070 "write_zeroes": true, 00:48:00.070 "flush": true, 00:48:00.070 "reset": true, 00:48:00.070 "compare": true, 00:48:00.070 "compare_and_write": false, 00:48:00.070 "abort": true, 00:48:00.070 "nvme_admin": true, 00:48:00.070 "nvme_io": true 00:48:00.070 }, 00:48:00.070 "driver_specific": { 00:48:00.070 "nvme": [ 00:48:00.070 { 00:48:00.070 "pci_address": "0000:00:11.0", 00:48:00.070 "trid": { 00:48:00.070 "trtype": "PCIe", 00:48:00.070 "traddr": "0000:00:11.0" 00:48:00.070 }, 00:48:00.070 "ctrlr_data": { 00:48:00.070 "cntlid": 0, 00:48:00.070 "vendor_id": "0x1b36", 00:48:00.070 "model_number": "QEMU NVMe Ctrl", 00:48:00.070 "serial_number": "12341", 00:48:00.070 "firmware_revision": "8.0.0", 00:48:00.070 "subnqn": "nqn.2019-08.org.qemu:12341", 00:48:00.070 "oacs": { 00:48:00.070 "security": 0, 00:48:00.070 "format": 1, 00:48:00.070 "firmware": 0, 00:48:00.070 "ns_manage": 1 00:48:00.070 }, 00:48:00.070 "multi_ctrlr": false, 00:48:00.070 "ana_reporting": false 00:48:00.070 }, 00:48:00.070 "vs": { 00:48:00.070 "nvme_version": "1.4" 00:48:00.070 }, 00:48:00.070 "ns_data": { 00:48:00.070 "id": 1, 00:48:00.070 "can_share": false 00:48:00.070 } 00:48:00.070 } 00:48:00.070 ], 00:48:00.070 "mp_policy": "active_passive" 00:48:00.070 } 00:48:00.070 } 00:48:00.070 ]' 00:48:00.070 09:17:01 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:48:00.070 09:17:01 -- common/autotest_common.sh@1369 -- # bs=4096 00:48:00.070 09:17:01 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:48:00.070 09:17:02 -- common/autotest_common.sh@1370 -- # nb=1310720 00:48:00.070 09:17:02 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:48:00.070 09:17:02 -- common/autotest_common.sh@1374 -- # echo 5120 00:48:00.070 09:17:02 -- ftl/common.sh@63 -- # base_size=5120 00:48:00.070 09:17:02 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:48:00.070 09:17:02 -- ftl/common.sh@67 -- # clear_lvols 00:48:00.070 09:17:02 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:00.070 09:17:02 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:00.328 09:17:02 -- ftl/common.sh@28 -- # stores=552a5f49-696a-4198-bf2a-812072e257d0 00:48:00.328 09:17:02 -- ftl/common.sh@29 -- # for lvs in $stores 00:48:00.328 09:17:02 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 552a5f49-696a-4198-bf2a-812072e257d0 00:48:00.584 09:17:02 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:48:00.843 09:17:02 -- ftl/common.sh@68 -- # lvs=eada9c4a-bc5c-4bdf-bbe5-6b53d4049260 00:48:00.843 09:17:02 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eada9c4a-bc5c-4bdf-bbe5-6b53d4049260 00:48:00.843 09:17:02 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:00.843 09:17:02 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:48:00.843 09:17:02 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 
38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:00.843 09:17:02 -- ftl/common.sh@35 -- # local name=nvc0 00:48:00.843 09:17:02 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:48:00.843 09:17:02 -- ftl/common.sh@37 -- # local base_bdev=38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:00.843 09:17:02 -- ftl/common.sh@38 -- # local cache_size= 00:48:00.843 09:17:02 -- ftl/common.sh@41 -- # get_bdev_size 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:00.843 09:17:02 -- common/autotest_common.sh@1364 -- # local bdev_name=38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:00.843 09:17:02 -- common/autotest_common.sh@1365 -- # local bdev_info 00:48:00.843 09:17:02 -- common/autotest_common.sh@1366 -- # local bs 00:48:00.843 09:17:02 -- common/autotest_common.sh@1367 -- # local nb 00:48:00.843 09:17:02 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:01.101 09:17:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:48:01.101 { 00:48:01.102 "name": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:01.102 "aliases": [ 00:48:01.102 "lvs/nvme0n1p0" 00:48:01.102 ], 00:48:01.102 "product_name": "Logical Volume", 00:48:01.102 "block_size": 4096, 00:48:01.102 "num_blocks": 26476544, 00:48:01.102 "uuid": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:01.102 "assigned_rate_limits": { 00:48:01.102 "rw_ios_per_sec": 0, 00:48:01.102 "rw_mbytes_per_sec": 0, 00:48:01.102 "r_mbytes_per_sec": 0, 00:48:01.102 "w_mbytes_per_sec": 0 00:48:01.102 }, 00:48:01.102 "claimed": false, 00:48:01.102 "zoned": false, 00:48:01.102 "supported_io_types": { 00:48:01.102 "read": true, 00:48:01.102 "write": true, 00:48:01.102 "unmap": true, 00:48:01.102 "write_zeroes": true, 00:48:01.102 "flush": false, 00:48:01.102 "reset": true, 00:48:01.102 "compare": false, 00:48:01.102 "compare_and_write": false, 00:48:01.102 "abort": false, 00:48:01.102 "nvme_admin": false, 00:48:01.102 "nvme_io": false 00:48:01.102 }, 00:48:01.102 "driver_specific": { 00:48:01.102 "lvol": { 00:48:01.102 "lvol_store_uuid": "eada9c4a-bc5c-4bdf-bbe5-6b53d4049260", 00:48:01.102 "base_bdev": "nvme0n1", 00:48:01.102 "thin_provision": true, 00:48:01.102 "snapshot": false, 00:48:01.102 "clone": false, 00:48:01.102 "esnap_clone": false 00:48:01.102 } 00:48:01.102 } 00:48:01.102 } 00:48:01.102 ]' 00:48:01.102 09:17:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:48:01.102 09:17:03 -- common/autotest_common.sh@1369 -- # bs=4096 00:48:01.102 09:17:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:48:01.358 09:17:03 -- common/autotest_common.sh@1370 -- # nb=26476544 00:48:01.358 09:17:03 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:48:01.358 09:17:03 -- common/autotest_common.sh@1374 -- # echo 103424 00:48:01.358 09:17:03 -- ftl/common.sh@41 -- # local base_size=5171 00:48:01.358 09:17:03 -- ftl/common.sh@44 -- # local nvc_bdev 00:48:01.358 09:17:03 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:48:01.616 09:17:03 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:48:01.616 09:17:03 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:48:01.616 09:17:03 -- ftl/common.sh@48 -- # get_bdev_size 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:01.616 09:17:03 -- common/autotest_common.sh@1364 -- # local bdev_name=38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:01.616 09:17:03 -- common/autotest_common.sh@1365 -- # local bdev_info 00:48:01.616 09:17:03 -- common/autotest_common.sh@1366 -- # local 
bs 00:48:01.616 09:17:03 -- common/autotest_common.sh@1367 -- # local nb 00:48:01.616 09:17:03 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:01.874 09:17:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:48:01.874 { 00:48:01.874 "name": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:01.874 "aliases": [ 00:48:01.874 "lvs/nvme0n1p0" 00:48:01.874 ], 00:48:01.874 "product_name": "Logical Volume", 00:48:01.874 "block_size": 4096, 00:48:01.874 "num_blocks": 26476544, 00:48:01.874 "uuid": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:01.874 "assigned_rate_limits": { 00:48:01.874 "rw_ios_per_sec": 0, 00:48:01.874 "rw_mbytes_per_sec": 0, 00:48:01.874 "r_mbytes_per_sec": 0, 00:48:01.874 "w_mbytes_per_sec": 0 00:48:01.874 }, 00:48:01.874 "claimed": false, 00:48:01.874 "zoned": false, 00:48:01.874 "supported_io_types": { 00:48:01.874 "read": true, 00:48:01.874 "write": true, 00:48:01.874 "unmap": true, 00:48:01.874 "write_zeroes": true, 00:48:01.874 "flush": false, 00:48:01.874 "reset": true, 00:48:01.874 "compare": false, 00:48:01.874 "compare_and_write": false, 00:48:01.874 "abort": false, 00:48:01.874 "nvme_admin": false, 00:48:01.874 "nvme_io": false 00:48:01.874 }, 00:48:01.874 "driver_specific": { 00:48:01.874 "lvol": { 00:48:01.874 "lvol_store_uuid": "eada9c4a-bc5c-4bdf-bbe5-6b53d4049260", 00:48:01.874 "base_bdev": "nvme0n1", 00:48:01.874 "thin_provision": true, 00:48:01.874 "snapshot": false, 00:48:01.874 "clone": false, 00:48:01.874 "esnap_clone": false 00:48:01.874 } 00:48:01.874 } 00:48:01.874 } 00:48:01.874 ]' 00:48:01.874 09:17:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:48:01.874 09:17:03 -- common/autotest_common.sh@1369 -- # bs=4096 00:48:01.874 09:17:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:48:01.874 09:17:03 -- common/autotest_common.sh@1370 -- # nb=26476544 00:48:01.874 09:17:03 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:48:01.874 09:17:03 -- common/autotest_common.sh@1374 -- # echo 103424 00:48:01.874 09:17:03 -- ftl/common.sh@48 -- # cache_size=5171 00:48:01.874 09:17:03 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:48:02.135 09:17:04 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:48:02.135 09:17:04 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:02.135 09:17:04 -- common/autotest_common.sh@1364 -- # local bdev_name=38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:02.135 09:17:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:48:02.135 09:17:04 -- common/autotest_common.sh@1366 -- # local bs 00:48:02.135 09:17:04 -- common/autotest_common.sh@1367 -- # local nb 00:48:02.135 09:17:04 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 38b8ca22-8f06-475c-a8df-df50687f8a2f 00:48:02.393 09:17:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:48:02.393 { 00:48:02.393 "name": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:02.393 "aliases": [ 00:48:02.393 "lvs/nvme0n1p0" 00:48:02.393 ], 00:48:02.393 "product_name": "Logical Volume", 00:48:02.393 "block_size": 4096, 00:48:02.393 "num_blocks": 26476544, 00:48:02.393 "uuid": "38b8ca22-8f06-475c-a8df-df50687f8a2f", 00:48:02.393 "assigned_rate_limits": { 00:48:02.393 "rw_ios_per_sec": 0, 00:48:02.393 "rw_mbytes_per_sec": 0, 00:48:02.393 "r_mbytes_per_sec": 0, 00:48:02.393 "w_mbytes_per_sec": 0 00:48:02.393 }, 
00:48:02.393 "claimed": false, 00:48:02.393 "zoned": false, 00:48:02.393 "supported_io_types": { 00:48:02.393 "read": true, 00:48:02.393 "write": true, 00:48:02.393 "unmap": true, 00:48:02.393 "write_zeroes": true, 00:48:02.393 "flush": false, 00:48:02.393 "reset": true, 00:48:02.393 "compare": false, 00:48:02.393 "compare_and_write": false, 00:48:02.393 "abort": false, 00:48:02.393 "nvme_admin": false, 00:48:02.393 "nvme_io": false 00:48:02.393 }, 00:48:02.393 "driver_specific": { 00:48:02.393 "lvol": { 00:48:02.393 "lvol_store_uuid": "eada9c4a-bc5c-4bdf-bbe5-6b53d4049260", 00:48:02.393 "base_bdev": "nvme0n1", 00:48:02.393 "thin_provision": true, 00:48:02.393 "snapshot": false, 00:48:02.393 "clone": false, 00:48:02.393 "esnap_clone": false 00:48:02.393 } 00:48:02.393 } 00:48:02.393 } 00:48:02.393 ]' 00:48:02.393 09:17:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:48:02.393 09:17:04 -- common/autotest_common.sh@1369 -- # bs=4096 00:48:02.393 09:17:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:48:02.393 09:17:04 -- common/autotest_common.sh@1370 -- # nb=26476544 00:48:02.393 09:17:04 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:48:02.393 09:17:04 -- common/autotest_common.sh@1374 -- # echo 103424 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 38b8ca22-8f06-475c-a8df-df50687f8a2f --l2p_dram_limit 10' 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:48:02.393 09:17:04 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 38b8ca22-8f06-475c-a8df-df50687f8a2f --l2p_dram_limit 10 -c nvc0n1p0 00:48:02.653 [2024-04-18 09:17:04.632813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.633073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:02.653 [2024-04-18 09:17:04.633200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:48:02.653 [2024-04-18 09:17:04.633242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.633349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.633475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:02.653 [2024-04-18 09:17:04.633559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:48:02.653 [2024-04-18 09:17:04.633602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.633656] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:02.653 [2024-04-18 09:17:04.635041] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:02.653 [2024-04-18 09:17:04.635198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.635275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:02.653 [2024-04-18 09:17:04.635317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:48:02.653 [2024-04-18 09:17:04.635380] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.635717] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 73c17e2f-1251-4840-aae8-19f53c13030e 00:48:02.653 [2024-04-18 09:17:04.637210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.637344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:48:02.653 [2024-04-18 09:17:04.637439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:48:02.653 [2024-04-18 09:17:04.637481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.645138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.645301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:02.653 [2024-04-18 09:17:04.645409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.556 ms 00:48:02.653 [2024-04-18 09:17:04.645458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.645609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.645686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:02.653 [2024-04-18 09:17:04.645746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:48:02.653 [2024-04-18 09:17:04.645783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.645877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.645920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:02.653 [2024-04-18 09:17:04.646046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:48:02.653 [2024-04-18 09:17:04.646097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.646166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:02.653 [2024-04-18 09:17:04.653308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.653457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:02.653 [2024-04-18 09:17:04.653549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.145 ms 00:48:02.653 [2024-04-18 09:17:04.653590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.653656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.653746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:02.653 [2024-04-18 09:17:04.653813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:48:02.653 [2024-04-18 09:17:04.653847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.653928] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:48:02.653 [2024-04-18 09:17:04.654085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:48:02.653 [2024-04-18 09:17:04.654153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:02.653 [2024-04-18 09:17:04.654211] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:48:02.653 [2024-04-18 09:17:04.654372] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:02.653 [2024-04-18 09:17:04.654493] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:02.653 [2024-04-18 09:17:04.654554] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:02.653 [2024-04-18 09:17:04.654648] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:02.653 [2024-04-18 09:17:04.654712] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:48:02.653 [2024-04-18 09:17:04.654774] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:48:02.653 [2024-04-18 09:17:04.654824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.654856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:02.653 [2024-04-18 09:17:04.654891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:48:02.653 [2024-04-18 09:17:04.654924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.655016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.653 [2024-04-18 09:17:04.655085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:02.653 [2024-04-18 09:17:04.655147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:48:02.653 [2024-04-18 09:17:04.655209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.653 [2024-04-18 09:17:04.655305] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:02.653 [2024-04-18 09:17:04.655341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:02.653 [2024-04-18 09:17:04.655388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:02.653 [2024-04-18 09:17:04.655501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:02.653 [2024-04-18 09:17:04.655551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:02.653 [2024-04-18 09:17:04.655607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:02.653 [2024-04-18 09:17:04.655646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:02.653 [2024-04-18 09:17:04.655679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:02.653 [2024-04-18 09:17:04.655771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:02.654 [2024-04-18 09:17:04.655804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:02.654 [2024-04-18 09:17:04.655840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:02.654 [2024-04-18 09:17:04.655872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:02.654 [2024-04-18 09:17:04.655959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:02.654 [2024-04-18 09:17:04.656028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:02.654 [2024-04-18 09:17:04.656117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:48:02.654 [2024-04-18 09:17:04.656157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:48:02.654 [2024-04-18 09:17:04.656234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:02.654 [2024-04-18 09:17:04.656274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:48:02.654 [2024-04-18 09:17:04.656312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:02.654 [2024-04-18 09:17:04.656434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:48:02.654 [2024-04-18 09:17:04.656503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:48:02.654 [2024-04-18 09:17:04.656538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:48:02.654 [2024-04-18 09:17:04.656575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:02.654 [2024-04-18 09:17:04.656610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:02.654 [2024-04-18 09:17:04.656646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:48:02.654 [2024-04-18 09:17:04.656681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:02.654 [2024-04-18 09:17:04.656775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:48:02.654 [2024-04-18 09:17:04.656816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:48:02.654 [2024-04-18 09:17:04.656853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:02.654 [2024-04-18 09:17:04.656888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:02.654 [2024-04-18 09:17:04.656924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:48:02.654 [2024-04-18 09:17:04.656959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:02.654 [2024-04-18 09:17:04.657062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:48:02.654 [2024-04-18 09:17:04.657101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:48:02.654 [2024-04-18 09:17:04.657137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:02.654 [2024-04-18 09:17:04.657172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:02.654 [2024-04-18 09:17:04.657210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:02.654 [2024-04-18 09:17:04.657305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:02.654 [2024-04-18 09:17:04.657341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:48:02.654 [2024-04-18 09:17:04.657376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:02.654 [2024-04-18 09:17:04.657459] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:02.654 [2024-04-18 09:17:04.657552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:02.654 [2024-04-18 09:17:04.657600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:02.654 [2024-04-18 09:17:04.657674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:02.654 [2024-04-18 09:17:04.657763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:02.654 [2024-04-18 09:17:04.657805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:02.654 [2024-04-18 09:17:04.657879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:02.654 [2024-04-18 09:17:04.657960] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:48:02.654 [2024-04-18 09:17:04.658004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:02.654 [2024-04-18 09:17:04.658074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:02.654 [2024-04-18 09:17:04.658159] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:02.654 [2024-04-18 09:17:04.658260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:02.654 [2024-04-18 09:17:04.658392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:02.654 [2024-04-18 09:17:04.658492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:48:02.654 [2024-04-18 09:17:04.658552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:48:02.654 [2024-04-18 09:17:04.658662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:48:02.654 [2024-04-18 09:17:04.658719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:48:02.654 [2024-04-18 09:17:04.658808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:48:02.654 [2024-04-18 09:17:04.658867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:48:02.654 [2024-04-18 09:17:04.658918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:48:02.654 [2024-04-18 09:17:04.659013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:48:02.654 [2024-04-18 09:17:04.659066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:48:02.654 [2024-04-18 09:17:04.659119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:48:02.654 [2024-04-18 09:17:04.659201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:48:02.654 [2024-04-18 09:17:04.659255] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:48:02.654 [2024-04-18 09:17:04.659341] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:02.654 [2024-04-18 09:17:04.659483] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:02.654 [2024-04-18 09:17:04.659538] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:02.654 [2024-04-18 09:17:04.659596] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:02.654 
[2024-04-18 09:17:04.659706] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:02.654 [2024-04-18 09:17:04.659763] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:02.654 [2024-04-18 09:17:04.659816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.659853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:02.654 [2024-04-18 09:17:04.659932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:48:02.654 [2024-04-18 09:17:04.659969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.687305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.687523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:02.654 [2024-04-18 09:17:04.687624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.200 ms 00:48:02.654 [2024-04-18 09:17:04.687731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.687861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.687904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:02.654 [2024-04-18 09:17:04.688004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:48:02.654 [2024-04-18 09:17:04.688049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.744096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.744303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:02.654 [2024-04-18 09:17:04.744444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.907 ms 00:48:02.654 [2024-04-18 09:17:04.744541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.744618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.744701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:02.654 [2024-04-18 09:17:04.744799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:48:02.654 [2024-04-18 09:17:04.744846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.745449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.745595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:02.654 [2024-04-18 09:17:04.745676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:48:02.654 [2024-04-18 09:17:04.745757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.654 [2024-04-18 09:17:04.745906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.654 [2024-04-18 09:17:04.746002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:02.654 [2024-04-18 09:17:04.746078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:48:02.654 [2024-04-18 09:17:04.746123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.912 [2024-04-18 09:17:04.772397] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.912 [2024-04-18 09:17:04.772596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:02.912 [2024-04-18 09:17:04.772714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.223 ms 00:48:02.912 [2024-04-18 09:17:04.772764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.912 [2024-04-18 09:17:04.788683] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:02.912 [2024-04-18 09:17:04.792240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.912 [2024-04-18 09:17:04.792417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:02.912 [2024-04-18 09:17:04.792512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:48:02.912 [2024-04-18 09:17:04.792555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.912 [2024-04-18 09:17:04.909089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.912 [2024-04-18 09:17:04.909264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:48:02.912 [2024-04-18 09:17:04.909380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.420 ms 00:48:02.912 [2024-04-18 09:17:04.909474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.912 [2024-04-18 09:17:04.909547] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:48:02.912 [2024-04-18 09:17:04.909704] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:48:07.093 [2024-04-18 09:17:08.757642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.093 [2024-04-18 09:17:08.757844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:48:07.093 [2024-04-18 09:17:08.757974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3848.070 ms 00:48:07.093 [2024-04-18 09:17:08.758017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.093 [2024-04-18 09:17:08.758257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.093 [2024-04-18 09:17:08.758305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:07.093 [2024-04-18 09:17:08.758434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:48:07.093 [2024-04-18 09:17:08.758518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.093 [2024-04-18 09:17:08.803771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.093 [2024-04-18 09:17:08.804074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:48:07.093 [2024-04-18 09:17:08.804195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.133 ms 00:48:07.093 [2024-04-18 09:17:08.804237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.093 [2024-04-18 09:17:08.848808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.093 [2024-04-18 09:17:08.849046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:48:07.093 [2024-04-18 09:17:08.849174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.461 ms 00:48:07.093 [2024-04-18 09:17:08.849210] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:48:07.093 [2024-04-18 09:17:08.849755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.093 [2024-04-18 09:17:08.849880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:07.093 [2024-04-18 09:17:08.849971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:48:07.093 [2024-04-18 09:17:08.850010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:08.961796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:08.962063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:48:07.094 [2024-04-18 09:17:08.962211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.674 ms 00:48:07.094 [2024-04-18 09:17:08.962254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.010038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:09.010288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:48:07.094 [2024-04-18 09:17:09.010436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.682 ms 00:48:07.094 [2024-04-18 09:17:09.010481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.013047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:09.013218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:48:07.094 [2024-04-18 09:17:09.013356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.469 ms 00:48:07.094 [2024-04-18 09:17:09.013422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.060658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:09.060886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:07.094 [2024-04-18 09:17:09.061045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.107 ms 00:48:07.094 [2024-04-18 09:17:09.061088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.061291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:09.061343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:07.094 [2024-04-18 09:17:09.061480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:48:07.094 [2024-04-18 09:17:09.061525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.061688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.094 [2024-04-18 09:17:09.061730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:07.094 [2024-04-18 09:17:09.061815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:48:07.094 [2024-04-18 09:17:09.061856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.094 [2024-04-18 09:17:09.063098] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4429.769 ms, result 0 00:48:07.094 { 00:48:07.094 "name": "ftl0", 00:48:07.094 "uuid": "73c17e2f-1251-4840-aae8-19f53c13030e" 00:48:07.094 } 00:48:07.094 09:17:09 -- ftl/dirty_shutdown.sh@64 -- # echo 
'{"subsystems": [' 00:48:07.094 09:17:09 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:48:07.352 09:17:09 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:48:07.352 09:17:09 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:48:07.352 09:17:09 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:48:07.610 /dev/nbd0 00:48:07.610 09:17:09 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:48:07.610 09:17:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:48:07.610 09:17:09 -- common/autotest_common.sh@855 -- # local i 00:48:07.610 09:17:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:48:07.610 09:17:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:48:07.610 09:17:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:48:07.610 09:17:09 -- common/autotest_common.sh@859 -- # break 00:48:07.610 09:17:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:48:07.610 09:17:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:48:07.610 09:17:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:48:07.610 1+0 records in 00:48:07.610 1+0 records out 00:48:07.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806335 s, 5.1 MB/s 00:48:07.610 09:17:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:48:07.610 09:17:09 -- common/autotest_common.sh@872 -- # size=4096 00:48:07.610 09:17:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:48:07.610 09:17:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:48:07.610 09:17:09 -- common/autotest_common.sh@875 -- # return 0 00:48:07.610 09:17:09 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:48:07.869 [2024-04-18 09:17:09.785599] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:48:07.869 [2024-04-18 09:17:09.786789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81595 ] 00:48:08.126 [2024-04-18 09:17:09.972206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:08.384 [2024-04-18 09:17:10.305146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:16.412  Copying: 174/1024 [MB] (174 MBps) Copying: 351/1024 [MB] (177 MBps) Copying: 531/1024 [MB] (180 MBps) Copying: 710/1024 [MB] (178 MBps) Copying: 877/1024 [MB] (166 MBps) Copying: 1024/1024 [MB] (average 175 MBps) 00:48:16.412 00:48:16.412 09:17:18 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:18.313 09:17:20 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:48:18.313 [2024-04-18 09:17:20.222630] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:48:18.313 [2024-04-18 09:17:20.223009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81703 ] 00:48:18.313 [2024-04-18 09:17:20.393064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:18.880 [2024-04-18 09:17:20.801938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:13.543  Copying: 18/1024 [MB] (18 MBps) Copying: 37/1024 [MB] (19 MBps) Copying: 56/1024 [MB] (19 MBps) Copying: 76/1024 [MB] (19 MBps) Copying: 97/1024 [MB] (20 MBps) Copying: 115/1024 [MB] (18 MBps) Copying: 134/1024 [MB] (19 MBps) Copying: 154/1024 [MB] (19 MBps) Copying: 174/1024 [MB] (20 MBps) Copying: 196/1024 [MB] (21 MBps) Copying: 216/1024 [MB] (20 MBps) Copying: 235/1024 [MB] (19 MBps) Copying: 255/1024 [MB] (19 MBps) Copying: 275/1024 [MB] (20 MBps) Copying: 294/1024 [MB] (19 MBps) Copying: 312/1024 [MB] (17 MBps) Copying: 329/1024 [MB] (17 MBps) Copying: 346/1024 [MB] (17 MBps) Copying: 364/1024 [MB] (17 MBps) Copying: 381/1024 [MB] (17 MBps) Copying: 399/1024 [MB] (17 MBps) Copying: 416/1024 [MB] (17 MBps) Copying: 434/1024 [MB] (17 MBps) Copying: 452/1024 [MB] (18 MBps) Copying: 470/1024 [MB] (18 MBps) Copying: 487/1024 [MB] (17 MBps) Copying: 506/1024 [MB] (18 MBps) Copying: 525/1024 [MB] (18 MBps) Copying: 544/1024 [MB] (19 MBps) Copying: 563/1024 [MB] (19 MBps) Copying: 583/1024 [MB] (19 MBps) Copying: 603/1024 [MB] (20 MBps) Copying: 623/1024 [MB] (20 MBps) Copying: 644/1024 [MB] (20 MBps) Copying: 665/1024 [MB] (20 MBps) Copying: 685/1024 [MB] (20 MBps) Copying: 706/1024 [MB] (20 MBps) Copying: 726/1024 [MB] (19 MBps) Copying: 745/1024 [MB] (18 MBps) Copying: 763/1024 [MB] (18 MBps) Copying: 783/1024 [MB] (19 MBps) Copying: 803/1024 [MB] (19 MBps) Copying: 824/1024 [MB] (21 MBps) Copying: 844/1024 [MB] (20 MBps) Copying: 866/1024 [MB] (21 MBps) Copying: 887/1024 [MB] (21 MBps) Copying: 908/1024 [MB] (20 MBps) Copying: 928/1024 [MB] (19 MBps) Copying: 948/1024 [MB] (20 MBps) Copying: 969/1024 [MB] (20 MBps) Copying: 989/1024 [MB] (20 MBps) Copying: 1008/1024 [MB] (18 MBps) Copying: 1024/1024 [MB] (average 19 MBps) 00:49:13.543 00:49:13.543 09:18:15 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:49:13.543 09:18:15 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:49:13.801 09:18:15 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:49:14.059 [2024-04-18 09:18:16.055503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.055765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:14.059 [2024-04-18 09:18:16.055900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:14.059 [2024-04-18 09:18:16.055952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.059 [2024-04-18 09:18:16.056092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:14.059 [2024-04-18 09:18:16.060466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.060653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:14.059 [2024-04-18 09:18:16.060756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:49:14.059 
[2024-04-18 09:18:16.060849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.059 [2024-04-18 09:18:16.062604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.062780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:14.059 [2024-04-18 09:18:16.062880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:49:14.059 [2024-04-18 09:18:16.062923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.059 [2024-04-18 09:18:16.080627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.080822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:14.059 [2024-04-18 09:18:16.080929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.563 ms 00:49:14.059 [2024-04-18 09:18:16.081060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.059 [2024-04-18 09:18:16.087314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.087511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:49:14.059 [2024-04-18 09:18:16.087619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:49:14.059 [2024-04-18 09:18:16.087705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.059 [2024-04-18 09:18:16.136121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.059 [2024-04-18 09:18:16.136346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:14.059 [2024-04-18 09:18:16.136465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.256 ms 00:49:14.059 [2024-04-18 09:18:16.136510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.163733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.163990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:14.319 [2024-04-18 09:18:16.164101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.071 ms 00:49:14.319 [2024-04-18 09:18:16.164144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.164420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.164479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:14.319 [2024-04-18 09:18:16.164520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:49:14.319 [2024-04-18 09:18:16.164609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.213902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.214230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:49:14.319 [2024-04-18 09:18:16.214338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.221 ms 00:49:14.319 [2024-04-18 09:18:16.214407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.263734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.263987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:49:14.319 [2024-04-18 09:18:16.264100] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.097 ms 00:49:14.319 [2024-04-18 09:18:16.264143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.312986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.313235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:14.319 [2024-04-18 09:18:16.313336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.745 ms 00:49:14.319 [2024-04-18 09:18:16.313393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.361237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.319 [2024-04-18 09:18:16.361488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:14.319 [2024-04-18 09:18:16.361634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.582 ms 00:49:14.319 [2024-04-18 09:18:16.361730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.319 [2024-04-18 09:18:16.361836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:14.319 [2024-04-18 09:18:16.361931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.362959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.363987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.364973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365688] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.365978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.366085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.366179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.366244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.366346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:14.319 [2024-04-18 09:18:16.366422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.366481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.366586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.366705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.366829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.366956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.367803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 
09:18:16.367913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.368997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:49:14.320 [2024-04-18 09:18:16.369714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.369999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.370057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.370139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.370196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.370282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:14.320 [2024-04-18 09:18:16.370351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:14.320 [2024-04-18 09:18:16.370408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73c17e2f-1251-4840-aae8-19f53c13030e 00:49:14.320 [2024-04-18 09:18:16.370517] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:49:14.320 [2024-04-18 09:18:16.370626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:49:14.320 [2024-04-18 09:18:16.370713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:49:14.320 [2024-04-18 09:18:16.370760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:49:14.320 [2024-04-18 09:18:16.370811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:14.320 [2024-04-18 09:18:16.370898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:14.320 [2024-04-18 09:18:16.370973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:14.320 [2024-04-18 09:18:16.371016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:14.320 [2024-04-18 09:18:16.371074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:14.320 [2024-04-18 09:18:16.371116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.320 [2024-04-18 09:18:16.371153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:14.320 [2024-04-18 09:18:16.371253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.283 ms 00:49:14.320 [2024-04-18 09:18:16.371295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.320 [2024-04-18 09:18:16.395898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:14.320 [2024-04-18 09:18:16.396167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:14.320 [2024-04-18 09:18:16.396274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.411 ms 00:49:14.320 [2024-04-18 09:18:16.396316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.320 [2024-04-18 09:18:16.396770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:49:14.320 [2024-04-18 09:18:16.396884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:14.320 [2024-04-18 09:18:16.396978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:49:14.320 [2024-04-18 09:18:16.397019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.578 [2024-04-18 09:18:16.483798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.578 [2024-04-18 09:18:16.484030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:14.578 [2024-04-18 09:18:16.484136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.578 [2024-04-18 09:18:16.484179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.578 [2024-04-18 09:18:16.484293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.578 [2024-04-18 09:18:16.484421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:14.578 [2024-04-18 09:18:16.484475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.578 [2024-04-18 09:18:16.484511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.578 [2024-04-18 09:18:16.484660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.578 [2024-04-18 09:18:16.484770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:14.578 [2024-04-18 09:18:16.484817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.578 [2024-04-18 09:18:16.484853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.578 [2024-04-18 09:18:16.484909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.578 [2024-04-18 09:18:16.484995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:14.578 [2024-04-18 09:18:16.485040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.578 [2024-04-18 09:18:16.485080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.578 [2024-04-18 09:18:16.631983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.578 [2024-04-18 09:18:16.632268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:14.578 [2024-04-18 09:18:16.632399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.578 [2024-04-18 09:18:16.632448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.836 [2024-04-18 09:18:16.690531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.836 [2024-04-18 09:18:16.690723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:14.836 [2024-04-18 09:18:16.690828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.836 [2024-04-18 09:18:16.690922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.836 [2024-04-18 09:18:16.691071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.836 [2024-04-18 09:18:16.691144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:14.836 [2024-04-18 09:18:16.691231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.836 [2024-04-18 09:18:16.691273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.836 
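(A note on the statistics dump a few entries above: "WAF" is the write amplification factor, conventionally the ratio of media writes to host writes, so

    WAF = total writes / user writes = 960 / 0 -> inf

and the "inf" is expected here, since this instance served no user I/O (total valid LBAs: 0, user writes: 0) and all 960 media writes were internal bookkeeping, presumably band and superblock metadata persisted during shutdown. Compare the second dump near the end of this run, where 119232 total writes against 118272 user writes give WAF = 119232 / 118272 ≈ 1.0081.)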
[2024-04-18 09:18:16.691442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.836 [2024-04-18 09:18:16.691494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:14.836 [2024-04-18 09:18:16.691648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.836 [2024-04-18 09:18:16.691691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.836 [2024-04-18 09:18:16.691873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.836 [2024-04-18 09:18:16.691927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:14.836 [2024-04-18 09:18:16.692028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.836 [2024-04-18 09:18:16.692108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.837 [2024-04-18 09:18:16.692201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.837 [2024-04-18 09:18:16.692302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:14.837 [2024-04-18 09:18:16.692352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.837 [2024-04-18 09:18:16.692444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.837 [2024-04-18 09:18:16.692533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.837 [2024-04-18 09:18:16.692621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:14.837 [2024-04-18 09:18:16.692667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.837 [2024-04-18 09:18:16.692734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.837 [2024-04-18 09:18:16.692825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:14.837 [2024-04-18 09:18:16.692874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:14.837 [2024-04-18 09:18:16.692926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:14.837 [2024-04-18 09:18:16.692961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:14.837 [2024-04-18 09:18:16.693137] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 637.592 ms, result 0 00:49:14.837 true 00:49:14.837 09:18:16 -- ftl/dirty_shutdown.sh@83 -- # kill -9 81436 00:49:14.837 09:18:16 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81436 00:49:14.837 09:18:16 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:49:14.837 [2024-04-18 09:18:16.832008] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
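(The xtrace lines above are the core of the dirty-shutdown scenario: spdk_tgt is SIGKILLed so the FTL device never performs a clean shutdown, its pid/trace file is removed, and spdk_dd then stages the data for the next write pass: 262144 blocks x 4096 B = 1024 MiB, matching the "Copying: .../1024 [MB]" progress that follows. A minimal sketch of the same sequence, with $svcpid and the file paths standing in for the concrete values hard-coded in dirty_shutdown.sh:

    # Kill the target hard; FTL is deliberately left in a dirty state.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
    # Stage 1 GiB of random data (262144 x 4 KiB blocks) as input for the
    # subsequent write to the ftl0 bdev.
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$FTL_TESTDIR/testfile2" \
        --bs=4096 --count=262144
)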
00:49:14.837 [2024-04-18 09:18:16.832425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82275 ] 00:49:15.095 [2024-04-18 09:18:17.015770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:15.354 [2024-04-18 09:18:17.283066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:22.949  Copying: 1024/1024 [MB] (average 177 MBps) 00:49:22.949 00:49:22.949 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81436 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:49:22.949 09:18:24 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:23.209 [2024-04-18 09:18:25.062198] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:49:23.209 [2024-04-18 09:18:25.062644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82356 ] 00:49:23.209 [2024-04-18 09:18:25.254230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:23.468 [2024-04-18 09:18:25.529599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:24.034 [2024-04-18 09:18:25.944303] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:24.034 [2024-04-18 09:18:25.944607] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:24.034 [2024-04-18 09:18:26.010093] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:49:24.034 [2024-04-18 09:18:26.010643] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:49:24.034 [2024-04-18 09:18:26.010940] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:49:24.294 [2024-04-18 09:18:26.230169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.230454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:24.294 [2024-04-18 09:18:26.230574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:24.294 [2024-04-18 09:18:26.230628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.230807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.230917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:24.294 [2024-04-18 09:18:26.231008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:49:24.294 [2024-04-18 09:18:26.231054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.231189] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:24.294 [2024-04-18 09:18:26.232385] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:24.294 [2024-04-18
09:18:26.232563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.232661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:24.294 [2024-04-18 09:18:26.232715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:49:24.294 [2024-04-18 09:18:26.232798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.234523] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:49:24.294 [2024-04-18 09:18:26.256801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.257012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:49:24.294 [2024-04-18 09:18:26.257119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.278 ms 00:49:24.294 [2024-04-18 09:18:26.257232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.257392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.257466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:49:24.294 [2024-04-18 09:18:26.257613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:49:24.294 [2024-04-18 09:18:26.257673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.265470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.265694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:24.294 [2024-04-18 09:18:26.265789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms 00:49:24.294 [2024-04-18 09:18:26.265890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.266075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.266142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:24.294 [2024-04-18 09:18:26.266293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:49:24.294 [2024-04-18 09:18:26.266348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.266501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.266559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:24.294 [2024-04-18 09:18:26.266700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:24.294 [2024-04-18 09:18:26.266760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.266856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:24.294 [2024-04-18 09:18:26.273034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.273213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:24.294 [2024-04-18 09:18:26.273323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.185 ms 00:49:24.294 [2024-04-18 09:18:26.273389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.273516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.273636] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:24.294 [2024-04-18 09:18:26.273699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:49:24.294 [2024-04-18 09:18:26.273774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.273930] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:49:24.294 [2024-04-18 09:18:26.274048] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:49:24.294 [2024-04-18 09:18:26.274203] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:49:24.294 [2024-04-18 09:18:26.274405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:49:24.294 [2024-04-18 09:18:26.274683] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:49:24.294 [2024-04-18 09:18:26.274833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:24.294 [2024-04-18 09:18:26.274997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:49:24.294 [2024-04-18 09:18:26.275187] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:24.294 [2024-04-18 09:18:26.275294] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:24.294 [2024-04-18 09:18:26.275354] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:24.294 [2024-04-18 09:18:26.275421] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:24.294 [2024-04-18 09:18:26.275482] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:49:24.294 [2024-04-18 09:18:26.275625] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:49:24.294 [2024-04-18 09:18:26.275667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.275711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:24.294 [2024-04-18 09:18:26.275794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:49:24.294 [2024-04-18 09:18:26.275848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.294 [2024-04-18 09:18:26.275968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.294 [2024-04-18 09:18:26.276039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:24.294 [2024-04-18 09:18:26.276090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:49:24.294 [2024-04-18 09:18:26.276190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.295 [2024-04-18 09:18:26.276329] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:24.295 [2024-04-18 09:18:26.276460] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:24.295 [2024-04-18 09:18:26.276507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:24.295 [2024-04-18 09:18:26.276611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:24.295 [2024-04-18 09:18:26.276663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:49:24.295 [2024-04-18 09:18:26.276713] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:24.295 [2024-04-18 09:18:26.276748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:24.295 [2024-04-18 09:18:26.276782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:24.295 [2024-04-18 09:18:26.276830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:24.295 [2024-04-18 09:18:26.276912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:24.295 [2024-04-18 09:18:26.276946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:24.295 [2024-04-18 09:18:26.276986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:24.295 [2024-04-18 09:18:26.277023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:24.295 [2024-04-18 09:18:26.277057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:24.295 [2024-04-18 09:18:26.277113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:49:24.295 [2024-04-18 09:18:26.277152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:24.295 [2024-04-18 09:18:26.277186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:24.295 [2024-04-18 09:18:26.277220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:49:24.295 [2024-04-18 09:18:26.277305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:24.295 [2024-04-18 09:18:26.277346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:49:24.295 [2024-04-18 09:18:26.277398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:49:24.295 [2024-04-18 09:18:26.277517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:49:24.295 [2024-04-18 09:18:26.277559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:24.295 [2024-04-18 09:18:26.277594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:24.295 [2024-04-18 09:18:26.277646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:49:24.295 [2024-04-18 09:18:26.277717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:24.295 [2024-04-18 09:18:26.277757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:49:24.295 [2024-04-18 09:18:26.277791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:49:24.295 [2024-04-18 09:18:26.277837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:24.295 [2024-04-18 09:18:26.277881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:24.295 [2024-04-18 09:18:26.277926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:49:24.295 [2024-04-18 09:18:26.277970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:24.295 [2024-04-18 09:18:26.278023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:49:24.295 [2024-04-18 09:18:26.278122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:49:24.295 [2024-04-18 09:18:26.278170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:24.295 [2024-04-18 09:18:26.278206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:24.295 [2024-04-18 09:18:26.278240] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:24.295 [2024-04-18 09:18:26.278306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:24.295 [2024-04-18 09:18:26.278351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:49:24.295 [2024-04-18 09:18:26.278409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:24.295 [2024-04-18 09:18:26.278446] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:24.295 [2024-04-18 09:18:26.278506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:24.295 [2024-04-18 09:18:26.278544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:24.295 [2024-04-18 09:18:26.278580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:24.295 [2024-04-18 09:18:26.278615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:24.295 [2024-04-18 09:18:26.278719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:24.295 [2024-04-18 09:18:26.278761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:24.295 [2024-04-18 09:18:26.278832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:24.295 [2024-04-18 09:18:26.278965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:24.295 [2024-04-18 09:18:26.279067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:24.295 [2024-04-18 09:18:26.279110] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:24.295 [2024-04-18 09:18:26.279225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:24.295 [2024-04-18 09:18:26.279293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:24.295 [2024-04-18 09:18:26.279365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:49:24.295 [2024-04-18 09:18:26.279481] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:49:24.295 [2024-04-18 09:18:26.279579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:49:24.295 [2024-04-18 09:18:26.279641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:49:24.295 [2024-04-18 09:18:26.279697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:49:24.295 [2024-04-18 09:18:26.279841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:49:24.295 [2024-04-18 09:18:26.279956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:49:24.295 [2024-04-18 09:18:26.280091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:49:24.295 [2024-04-18 09:18:26.280259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 
blk_sz:0x20 00:49:24.295 [2024-04-18 09:18:26.280437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:49:24.295 [2024-04-18 09:18:26.280560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:49:24.295 [2024-04-18 09:18:26.280648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:49:24.295 [2024-04-18 09:18:26.280817] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:24.295 [2024-04-18 09:18:26.280936] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:24.295 [2024-04-18 09:18:26.281063] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:24.295 [2024-04-18 09:18:26.281244] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:24.295 [2024-04-18 09:18:26.281329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:24.295 [2024-04-18 09:18:26.281507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:24.295 [2024-04-18 09:18:26.281570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.295 [2024-04-18 09:18:26.281606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:24.295 [2024-04-18 09:18:26.281743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms 00:49:24.295 [2024-04-18 09:18:26.281789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.295 [2024-04-18 09:18:26.309130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.295 [2024-04-18 09:18:26.309515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:24.295 [2024-04-18 09:18:26.309664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.221 ms 00:49:24.295 [2024-04-18 09:18:26.309719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.295 [2024-04-18 09:18:26.309924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.295 [2024-04-18 09:18:26.310117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:24.295 [2024-04-18 09:18:26.310244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:49:24.295 [2024-04-18 09:18:26.310353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.295 [2024-04-18 09:18:26.394185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.295 [2024-04-18 09:18:26.394515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:24.295 [2024-04-18 09:18:26.394653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.666 ms 00:49:24.295 [2024-04-18 09:18:26.394751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.295 [2024-04-18 09:18:26.394974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.295 [2024-04-18 09:18:26.395090] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:24.295 [2024-04-18 09:18:26.395275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:24.295 [2024-04-18 09:18:26.395473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.396318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.396518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:24.553 [2024-04-18 09:18:26.396645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:49:24.553 [2024-04-18 09:18:26.396703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.396977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.397046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:24.553 [2024-04-18 09:18:26.397174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:49:24.553 [2024-04-18 09:18:26.397231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.429528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.429807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:24.553 [2024-04-18 09:18:26.429950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.220 ms 00:49:24.553 [2024-04-18 09:18:26.430067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.459245] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:49:24.553 [2024-04-18 09:18:26.459564] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:49:24.553 [2024-04-18 09:18:26.459747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.459819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:49:24.553 [2024-04-18 09:18:26.459881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.384 ms 00:49:24.553 [2024-04-18 09:18:26.459935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.508081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.508354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:49:24.553 [2024-04-18 09:18:26.508520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.003 ms 00:49:24.553 [2024-04-18 09:18:26.508593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.536866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.537109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:49:24.553 [2024-04-18 09:18:26.537236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.103 ms 00:49:24.553 [2024-04-18 09:18:26.537292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.565559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.565793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:49:24.553 [2024-04-18 09:18:26.565921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.096 ms 00:49:24.553 [2024-04-18 09:18:26.565984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.553 [2024-04-18 09:18:26.566874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.553 [2024-04-18 09:18:26.567076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:24.553 [2024-04-18 09:18:26.567272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:49:24.553 [2024-04-18 09:18:26.567467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.676720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.676981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:49:24.813 [2024-04-18 09:18:26.677081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.150 ms 00:49:24.813 [2024-04-18 09:18:26.677125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.690994] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:49:24.813 [2024-04-18 09:18:26.694703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.694880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:24.813 [2024-04-18 09:18:26.694996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.380 ms 00:49:24.813 [2024-04-18 09:18:26.695102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.695271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.695334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:49:24.813 [2024-04-18 09:18:26.695451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:24.813 [2024-04-18 09:18:26.695632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.695849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.695968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:24.813 [2024-04-18 09:18:26.696085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:49:24.813 [2024-04-18 09:18:26.696137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.698742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.698881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:49:24.813 [2024-04-18 09:18:26.698978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:49:24.813 [2024-04-18 09:18:26.699030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.699160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.699210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:24.813 [2024-04-18 09:18:26.699257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:24.813 [2024-04-18 09:18:26.699302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 
09:18:26.699418] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:49:24.813 [2024-04-18 09:18:26.699473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.699526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:49:24.813 [2024-04-18 09:18:26.699631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:49:24.813 [2024-04-18 09:18:26.699678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.743684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.743955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:24.813 [2024-04-18 09:18:26.744103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.916 ms 00:49:24.813 [2024-04-18 09:18:26.744232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.744369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:24.813 [2024-04-18 09:18:26.744438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:24.813 [2024-04-18 09:18:26.744525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:49:24.813 [2024-04-18 09:18:26.744668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:24.813 [2024-04-18 09:18:26.746122] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 515.370 ms, result 0 00:49:54.203  Copying: 1024/1024 [MB] (average 34 MBps)[2024-04-18 09:18:56.236850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.203 [2024-04-18 09:18:56.237088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:54.203 [2024-04-18 09:18:56.237221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:54.203 [2024-04-18 09:18:56.237266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.203 [2024-04-18 09:18:56.240338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:54.203 [2024-04-18 09:18:56.247120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.203 [2024-04-18 09:18:56.247274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:54.203 [2024-04-18 09:18:56.247412] mngt/ftl_mngt.c: 409:trace_step:
*NOTICE*: [FTL][ftl0] duration: 6.549 ms 00:49:54.203 [2024-04-18 09:18:56.247450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.203 [2024-04-18 09:18:56.257629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.203 [2024-04-18 09:18:56.257840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:54.203 [2024-04-18 09:18:56.257968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.307 ms 00:49:54.203 [2024-04-18 09:18:56.258017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.204 [2024-04-18 09:18:56.278804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.204 [2024-04-18 09:18:56.279102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:54.204 [2024-04-18 09:18:56.279212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.735 ms 00:49:54.204 [2024-04-18 09:18:56.279254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.204 [2024-04-18 09:18:56.284736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.204 [2024-04-18 09:18:56.284878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:49:54.204 [2024-04-18 09:18:56.284971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:49:54.204 [2024-04-18 09:18:56.285025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.464 [2024-04-18 09:18:56.328441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.464 [2024-04-18 09:18:56.328655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:54.464 [2024-04-18 09:18:56.328755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.269 ms 00:49:54.464 [2024-04-18 09:18:56.328795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.464 [2024-04-18 09:18:56.353018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.464 [2024-04-18 09:18:56.353284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:54.464 [2024-04-18 09:18:56.353457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.084 ms 00:49:54.464 [2024-04-18 09:18:56.353499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.464 [2024-04-18 09:18:56.429279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.464 [2024-04-18 09:18:56.429583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:54.464 [2024-04-18 09:18:56.429673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.684 ms 00:49:54.464 [2024-04-18 09:18:56.429730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.464 [2024-04-18 09:18:56.474508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.464 [2024-04-18 09:18:56.474758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:49:54.464 [2024-04-18 09:18:56.474889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.720 ms 00:49:54.464 [2024-04-18 09:18:56.474931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.464 [2024-04-18 09:18:56.522065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.464 [2024-04-18 09:18:56.522302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:49:54.464 
[2024-04-18 09:18:56.522417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.050 ms 00:49:54.464 [2024-04-18 09:18:56.522460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.726 [2024-04-18 09:18:56.567583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.726 [2024-04-18 09:18:56.567800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:54.726 [2024-04-18 09:18:56.567904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.035 ms 00:49:54.726 [2024-04-18 09:18:56.568026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.726 [2024-04-18 09:18:56.613131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.726 [2024-04-18 09:18:56.613309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:54.726 [2024-04-18 09:18:56.613405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.946 ms 00:49:54.726 [2024-04-18 09:18:56.613444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.726 [2024-04-18 09:18:56.613523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:54.726 [2024-04-18 09:18:56.613571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:49:54.726 [2024-04-18 09:18:56.613765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.613838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.613892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.613947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614892] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.614948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.615966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.616980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 
[2024-04-18 09:18:56.617036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.617969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.618918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:49:54.726 [2024-04-18 09:18:56.619011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.619964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.620964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:54.726 [2024-04-18 09:18:56.621788] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:54.726 [2024-04-18 09:18:56.621841] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73c17e2f-1251-4840-aae8-19f53c13030e 00:49:54.726 [2024-04-18 09:18:56.621967] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:49:54.726 [2024-04-18 09:18:56.622035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:49:54.726 [2024-04-18 09:18:56.622075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:49:54.726 [2024-04-18 09:18:56.622112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:49:54.726 [2024-04-18 09:18:56.622163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:54.726 [2024-04-18 09:18:56.622256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:54.726 [2024-04-18 09:18:56.622302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:54.726 [2024-04-18 09:18:56.622350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:54.726 [2024-04-18 09:18:56.622392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:54.726 [2024-04-18 09:18:56.622430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.726 [2024-04-18 09:18:56.622464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:54.726 [2024-04-18 09:18:56.622499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.908 ms 00:49:54.726 [2024-04-18 09:18:56.622530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.726 [2024-04-18 09:18:56.645359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.726 [2024-04-18 09:18:56.645583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:54.726 [2024-04-18 09:18:56.645685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.680 ms 00:49:54.726 [2024-04-18 09:18:56.645725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.726 [2024-04-18 09:18:56.646086] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:54.727 [2024-04-18 09:18:56.646123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:54.727 [2024-04-18 09:18:56.646196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:49:54.727 [2024-04-18 09:18:56.646253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.727 [2024-04-18 09:18:56.709302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.727 [2024-04-18 09:18:56.709571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:54.727 [2024-04-18 09:18:56.709676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.727 [2024-04-18 09:18:56.709716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.727 [2024-04-18 09:18:56.709818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.727 [2024-04-18 09:18:56.709852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:54.727 [2024-04-18 09:18:56.709886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.727 [2024-04-18 09:18:56.709976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.727 [2024-04-18 09:18:56.710106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.727 [2024-04-18 09:18:56.710143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:54.727 [2024-04-18 09:18:56.710178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.727 [2024-04-18 09:18:56.710255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.727 [2024-04-18 09:18:56.710301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.727 [2024-04-18 09:18:56.710333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:54.727 [2024-04-18 09:18:56.710362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.727 [2024-04-18 09:18:56.710406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.836835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.837078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:54.986 [2024-04-18 09:18:56.837175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.837214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.891253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.891490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:54.986 [2024-04-18 09:18:56.891591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.891630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.891729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.891780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:54.986 [2024-04-18 09:18:56.891832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.891877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.891937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.892045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:54.986 [2024-04-18 09:18:56.892087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.892120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.892281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.892322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:54.986 [2024-04-18 09:18:56.892357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.892410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.892551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.892594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:54.986 [2024-04-18 09:18:56.892629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.892751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.892825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.892862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:54.986 [2024-04-18 09:18:56.892941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.893019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.893107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:54.986 [2024-04-18 09:18:56.893146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:54.986 [2024-04-18 09:18:56.893180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:54.986 [2024-04-18 09:18:56.893253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:54.986 [2024-04-18 09:18:56.893454] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 658.408 ms, result 0 00:49:57.519 00:49:57.519 00:49:57.519 09:18:59 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:49:59.497 09:19:01 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:59.497 [2024-04-18 09:19:01.300342] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
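The FTL instance has now been torn down ('FTL shutdown', 658.408 ms, result 0) and dirty_shutdown.sh is reading the bdev's contents back out through spdk_dd so they can be checksummed against the data written earlier in the run. Pieced together from the md5sum and spdk_dd invocations visible in this log, the round trip looks roughly like the sketch below; the file names are placeholders, the write-leg flags --if/--ob are an assumption (only the read leg --ib/--of appears above), and --count is in blocks (262144 blocks of 4 KiB is the 1 GiB copied here).

  # Write a known payload into the FTL bdev, bounce the device, read it
  # back, and require identical checksums. A sketch, not the script verbatim.
  spdk_dd --if=data.bin --ob=ftl0 --count=262144 --json=ftl.json   # write leg (flag spelling assumed)
  before=$(md5sum data.bin | cut -d' ' -f1)
  # ...FTL shutdown and restart happen between the two transfers...
  spdk_dd --ib=ftl0 --of=readback.bin --count=262144 --json=ftl.json  # read leg, as logged above
  after=$(md5sum readback.bin | cut -d' ' -f1)
  [ "$before" = "$after" ] || echo "data mismatch after FTL shutdown/restart" >&2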
00:49:59.497 [2024-04-18 09:19:01.300750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82715 ] 00:49:59.497 [2024-04-18 09:19:01.486553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:59.755 [2024-04-18 09:19:01.814969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:00.323 [2024-04-18 09:19:02.283223] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:00.323 [2024-04-18 09:19:02.283542] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:00.584 [2024-04-18 09:19:02.446594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.446854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:00.584 [2024-04-18 09:19:02.446969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:00.584 [2024-04-18 09:19:02.447018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.447146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.447261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:00.584 [2024-04-18 09:19:02.447462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:50:00.584 [2024-04-18 09:19:02.447506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.447571] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:00.584 [2024-04-18 09:19:02.448867] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:00.584 [2024-04-18 09:19:02.449046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.449137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:00.584 [2024-04-18 09:19:02.449181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:50:00.584 [2024-04-18 09:19:02.449294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.451009] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:00.584 [2024-04-18 09:19:02.474609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.474833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:00.584 [2024-04-18 09:19:02.474971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.601 ms 00:50:00.584 [2024-04-18 09:19:02.475015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.475108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.475217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:00.584 [2024-04-18 09:19:02.475261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:50:00.584 [2024-04-18 09:19:02.475296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.482762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 
09:19:02.482956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:00.584 [2024-04-18 09:19:02.483073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.282 ms 00:50:00.584 [2024-04-18 09:19:02.483117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.483262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.483353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:00.584 [2024-04-18 09:19:02.483419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:50:00.584 [2024-04-18 09:19:02.483458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.483544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.483590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:00.584 [2024-04-18 09:19:02.483626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:00.584 [2024-04-18 09:19:02.483738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.483808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:00.584 [2024-04-18 09:19:02.490896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.491075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:00.584 [2024-04-18 09:19:02.491163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.096 ms 00:50:00.584 [2024-04-18 09:19:02.491213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.491292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.491438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:00.584 [2024-04-18 09:19:02.491491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:50:00.584 [2024-04-18 09:19:02.491525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.491634] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:00.584 [2024-04-18 09:19:02.491704] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:50:00.584 [2024-04-18 09:19:02.491879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:00.584 [2024-04-18 09:19:02.491955] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:50:00.584 [2024-04-18 09:19:02.492105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:50:00.584 [2024-04-18 09:19:02.492167] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:00.584 [2024-04-18 09:19:02.492225] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:50:00.584 [2024-04-18 09:19:02.492335] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:00.584 [2024-04-18 09:19:02.492420] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:00.584 [2024-04-18 09:19:02.492594] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:00.584 [2024-04-18 09:19:02.492629] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:00.584 [2024-04-18 09:19:02.492663] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:50:00.584 [2024-04-18 09:19:02.492698] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:50:00.584 [2024-04-18 09:19:02.492733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.492768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:00.584 [2024-04-18 09:19:02.492806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:50:00.584 [2024-04-18 09:19:02.492939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.493084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.584 [2024-04-18 09:19:02.493127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:00.584 [2024-04-18 09:19:02.493167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:00.584 [2024-04-18 09:19:02.493202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.584 [2024-04-18 09:19:02.493311] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:00.584 [2024-04-18 09:19:02.493383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:00.584 [2024-04-18 09:19:02.493457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:00.584 [2024-04-18 09:19:02.493493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:00.584 [2024-04-18 09:19:02.493529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:00.584 [2024-04-18 09:19:02.493564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:00.584 [2024-04-18 09:19:02.493598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:00.584 [2024-04-18 09:19:02.493633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:00.584 [2024-04-18 09:19:02.493733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:00.584 [2024-04-18 09:19:02.493798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:00.584 [2024-04-18 09:19:02.493833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:00.584 [2024-04-18 09:19:02.493868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:00.584 [2024-04-18 09:19:02.493916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:00.584 [2024-04-18 09:19:02.493951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:00.584 [2024-04-18 09:19:02.493985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:50:00.584 [2024-04-18 09:19:02.494074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:00.584 [2024-04-18 09:19:02.494139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:00.584 [2024-04-18 09:19:02.494173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:50:00.584 [2024-04-18 09:19:02.494206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:50:00.584 [2024-04-18 09:19:02.494238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:50:00.584 [2024-04-18 09:19:02.494271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:50:00.584 [2024-04-18 09:19:02.494304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:50:00.584 [2024-04-18 09:19:02.494482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:00.584 [2024-04-18 09:19:02.494530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:00.584 [2024-04-18 09:19:02.494565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:00.584 [2024-04-18 09:19:02.494600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:00.584 [2024-04-18 09:19:02.494663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:50:00.584 [2024-04-18 09:19:02.494806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:00.584 [2024-04-18 09:19:02.494846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:00.584 [2024-04-18 09:19:02.494882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:00.584 [2024-04-18 09:19:02.494916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:00.584 [2024-04-18 09:19:02.494951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:00.585 [2024-04-18 09:19:02.495049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:50:00.585 [2024-04-18 09:19:02.495102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:00.585 [2024-04-18 09:19:02.495149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:00.585 [2024-04-18 09:19:02.495194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:00.585 [2024-04-18 09:19:02.495240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:00.585 [2024-04-18 09:19:02.495340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:00.585 [2024-04-18 09:19:02.495398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:50:00.585 [2024-04-18 09:19:02.495449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:00.585 [2024-04-18 09:19:02.495565] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:00.585 [2024-04-18 09:19:02.495612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:00.585 [2024-04-18 09:19:02.495699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:00.585 [2024-04-18 09:19:02.495772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:00.585 [2024-04-18 09:19:02.495808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:00.585 [2024-04-18 09:19:02.495843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:00.585 [2024-04-18 09:19:02.495877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:00.585 [2024-04-18 09:19:02.495912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:00.585 [2024-04-18 09:19:02.495955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:00.585 [2024-04-18 09:19:02.496078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:00.585 [2024-04-18 09:19:02.496147] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:00.585 [2024-04-18 09:19:02.496207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:00.585 [2024-04-18 09:19:02.496265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:00.585 [2024-04-18 09:19:02.496321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:50:00.585 [2024-04-18 09:19:02.496486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:50:00.585 [2024-04-18 09:19:02.496543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:50:00.585 [2024-04-18 09:19:02.496599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:50:00.585 [2024-04-18 09:19:02.496655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:50:00.585 [2024-04-18 09:19:02.496775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:50:00.585 [2024-04-18 09:19:02.496838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:50:00.585 [2024-04-18 09:19:02.496894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:50:00.585 [2024-04-18 09:19:02.497004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:50:00.585 [2024-04-18 09:19:02.497066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:50:00.585 [2024-04-18 09:19:02.497122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:50:00.585 [2024-04-18 09:19:02.497246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:50:00.585 [2024-04-18 09:19:02.497303] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:00.585 [2024-04-18 09:19:02.497361] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:00.585 [2024-04-18 09:19:02.497530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:00.585 [2024-04-18 09:19:02.497651] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:00.585 [2024-04-18 09:19:02.497710] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:00.585 [2024-04-18 09:19:02.497817] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:50:00.585 [2024-04-18 09:19:02.497878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.497914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:00.585 [2024-04-18 09:19:02.497994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:50:00.585 [2024-04-18 09:19:02.498035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.526444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.526657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:00.585 [2024-04-18 09:19:02.526786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.308 ms 00:50:00.585 [2024-04-18 09:19:02.526826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.526952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.526994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:00.585 [2024-04-18 09:19:02.527081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:50:00.585 [2024-04-18 09:19:02.527119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.595748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.595989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:00.585 [2024-04-18 09:19:02.596185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.526 ms 00:50:00.585 [2024-04-18 09:19:02.596252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.596344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.596463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:00.585 [2024-04-18 09:19:02.596508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:00.585 [2024-04-18 09:19:02.596543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.597138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.597259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:00.585 [2024-04-18 09:19:02.597335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:50:00.585 [2024-04-18 09:19:02.597387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.597623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.597727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:00.585 [2024-04-18 09:19:02.597807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:50:00.585 [2024-04-18 09:19:02.597847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.623056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.623280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:00.585 [2024-04-18 09:19:02.623366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.154 ms 00:50:00.585 [2024-04-18 
09:19:02.623427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.646424] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:50:00.585 [2024-04-18 09:19:02.646659] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:00.585 [2024-04-18 09:19:02.646842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.646878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:00.585 [2024-04-18 09:19:02.646947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.244 ms 00:50:00.585 [2024-04-18 09:19:02.646985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.585 [2024-04-18 09:19:02.684296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.585 [2024-04-18 09:19:02.684556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:00.585 [2024-04-18 09:19:02.684651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.222 ms 00:50:00.585 [2024-04-18 09:19:02.684695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.708191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.708391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:00.888 [2024-04-18 09:19:02.708514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.396 ms 00:50:00.888 [2024-04-18 09:19:02.708594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.729943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.730200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:00.888 [2024-04-18 09:19:02.730297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.258 ms 00:50:00.888 [2024-04-18 09:19:02.730340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.730974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.731121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:00.888 [2024-04-18 09:19:02.731210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:50:00.888 [2024-04-18 09:19:02.731304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.838023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.838260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:00.888 [2024-04-18 09:19:02.838354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.625 ms 00:50:00.888 [2024-04-18 09:19:02.838412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.854137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:50:00.888 [2024-04-18 09:19:02.857852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.858029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:00.888 [2024-04-18 09:19:02.858158] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:50:00.888 [2024-04-18 09:19:02.858197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.858337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.858488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:00.888 [2024-04-18 09:19:02.858542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:00.888 [2024-04-18 09:19:02.858574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.860091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.860262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:00.888 [2024-04-18 09:19:02.860357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:50:00.888 [2024-04-18 09:19:02.860460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.862792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.862915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:50:00.888 [2024-04-18 09:19:02.863002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:50:00.888 [2024-04-18 09:19:02.863039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.863101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.863139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:00.888 [2024-04-18 09:19:02.863174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:00.888 [2024-04-18 09:19:02.863206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.863340] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:00.888 [2024-04-18 09:19:02.863404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.863439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:00.888 [2024-04-18 09:19:02.863524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:50:00.888 [2024-04-18 09:19:02.863567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.908606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.908835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:00.888 [2024-04-18 09:19:02.908932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.979 ms 00:50:00.888 [2024-04-18 09:19:02.908986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.888 [2024-04-18 09:19:02.909103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:00.888 [2024-04-18 09:19:02.909155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:00.888 [2024-04-18 09:19:02.909194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:50:00.889 [2024-04-18 09:19:02.909229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:00.889 [2024-04-18 09:19:02.916763] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 467.589 ms, result 0 00:50:30.019  Copying: 964/1048576 [kB] (964 kBps) Copying: 5164/1048576 [kB] (4200 kBps) Copying: 38/1024 [MB] (33 MBps) [progress updates continue at 33-41 MBps] Copying: 1024/1024 [MB] (average 35 MBps)[2024-04-18 09:19:32.008502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.019 [2024-04-18 09:19:32.008877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:30.019 [2024-04-18 09:19:32.009054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:30.019 [2024-04-18 09:19:32.009185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.019 [2024-04-18 09:19:32.009357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:30.019 [2024-04-18 09:19:32.017102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.019 [2024-04-18 09:19:32.017393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:30.019 [2024-04-18 09:19:32.017562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:50:30.019 [2024-04-18 09:19:32.017687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.019 [2024-04-18 09:19:32.018065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.019 [2024-04-18 09:19:32.018233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:30.019 [2024-04-18 09:19:32.018407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:50:30.019 [2024-04-18 09:19:32.018482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.019 [2024-04-18 09:19:32.030944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.019 [2024-04-18 09:19:32.031231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:30.019 [2024-04-18 09:19:32.031403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.359 ms 00:50:30.019 [2024-04-18 09:19:32.031568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.019 [2024-04-18 09:19:32.040196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.019 [2024-04-18 09:19:32.040443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:50:30.019 [2024-04-18 09:19:32.040571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.515 ms 00:50:30.019 [2024-04-18 09:19:32.040639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.019 [2024-04-18 09:19:32.100886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
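Every management step in these traces is reported through the same four trace_step records (Action, name, duration, status), so a slow startup or shutdown can be profiled straight from the console output; in the startup above, for example, Restore P2L checkpoints dominates the 467.589 ms total at 106.625 ms, followed by Initialize NV cache at 68.526 ms. A minimal sketch for ranking step durations, assuming this console output has been saved to build.log and that each name record is immediately followed by its duration record, as it is here:

  # Pull out trace_step name/duration pairs and sort the steps by cost.
  grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9]+\.[0-9]+ ms' build.log |
  awk '/^name:/     { sub(/^name: /, ""); sub(/ [0-9]+$/, ""); step = $0 }   # strip label and trailing timestamp fragment
       /^duration:/ { printf "%10.3f ms  %s\n", $2, step }' |                # pair each duration with the preceding name
  sort -rn | head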
00:50:30.019 [2024-04-18 09:19:32.101232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:30.019 [2024-04-18 09:19:32.101396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.042 ms 00:50:30.019 [2024-04-18 09:19:32.101458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.277 [2024-04-18 09:19:32.134446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.277 [2024-04-18 09:19:32.134719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:30.277 [2024-04-18 09:19:32.134861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.803 ms 00:50:30.277 [2024-04-18 09:19:32.134917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.277 [2024-04-18 09:19:32.138557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.277 [2024-04-18 09:19:32.138737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:30.277 [2024-04-18 09:19:32.138849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:50:30.277 [2024-04-18 09:19:32.138906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.277 [2024-04-18 09:19:32.201897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.277 [2024-04-18 09:19:32.202143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:50:30.277 [2024-04-18 09:19:32.202336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.851 ms 00:50:30.277 [2024-04-18 09:19:32.202422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.277 [2024-04-18 09:19:32.267413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.277 [2024-04-18 09:19:32.267682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:50:30.277 [2024-04-18 09:19:32.267876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.881 ms 00:50:30.277 [2024-04-18 09:19:32.267933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.277 [2024-04-18 09:19:32.333132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.277 [2024-04-18 09:19:32.334609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:30.277 [2024-04-18 09:19:32.334764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.068 ms 00:50:30.277 [2024-04-18 09:19:32.334819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.542 [2024-04-18 09:19:32.383008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.542 [2024-04-18 09:19:32.383240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:30.542 [2024-04-18 09:19:32.383356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.003 ms 00:50:30.542 [2024-04-18 09:19:32.383420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.542 [2024-04-18 09:19:32.383504] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:30.542 [2024-04-18 09:19:32.383606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:50:30.542 [2024-04-18 09:19:32.383669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:50:30.542 [2024-04-18 09:19:32.383760] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:30.542 [2024-04-18 09:19:32.383873 .. 09:19:32.393123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 4-100: 0 / 261120 wr_cnt: 0 state: free 00:50:30.543 [2024-04-18 09:19:32.393226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] [2024-04-18 09:19:32.393268] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73c17e2f-1251-4840-aae8-19f53c13030e
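The WAF figure in these statistics dumps is simply total writes divided by user writes: the shutdown dump earlier in the run reported 119232 / 118272 = 1.0081, and the dump concluding below reports 148672 / 146688 = 1.0135, i.e. the FTL issued roughly 1.4% extra writes for its own metadata and relocation on top of the user data. A quick way to recompute it from the dump records, again assuming the console output is saved as build.log:

  # Recompute write amplification from each ftl_dev_dump_stats block.
  grep -oE '(total|user) writes: [0-9]+' build.log |
  awk '/total/ { t = $3 } /user/ { printf "WAF: %.4f\n", t / $3 }'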
00:50:30.543 [2024-04-18 09:19:32.393369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:50:30.543 [2024-04-18 09:19:32.393423] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 148672 00:50:30.543 [2024-04-18 09:19:32.393457] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 146688 00:50:30.543 [2024-04-18 09:19:32.393529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0135 00:50:30.543 [2024-04-18 09:19:32.393627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:30.543 [2024-04-18 09:19:32.393669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:30.543 [2024-04-18 09:19:32.393747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:30.543 [2024-04-18 09:19:32.393786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:30.543 [2024-04-18 09:19:32.393871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:30.543 [2024-04-18 09:19:32.393933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.543 [2024-04-18 09:19:32.393967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:30.543 [2024-04-18 09:19:32.394004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.429 ms 00:50:30.543 [2024-04-18 09:19:32.394038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.415444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.543 [2024-04-18 09:19:32.415656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:30.543 [2024-04-18 09:19:32.415736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.315 ms 00:50:30.543 [2024-04-18 09:19:32.415826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.416203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:30.543 [2024-04-18 09:19:32.416312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:30.543 [2024-04-18 09:19:32.416411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:50:30.543 [2024-04-18 09:19:32.416522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.479593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.543 [2024-04-18 09:19:32.479842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:30.543 [2024-04-18 09:19:32.480010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.543 [2024-04-18 09:19:32.480119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.480229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.543 [2024-04-18 09:19:32.480315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:30.543 [2024-04-18 09:19:32.480357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.543 [2024-04-18 09:19:32.480448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.480646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.543 [2024-04-18 09:19:32.480737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:30.543 [2024-04-18 
09:19:32.480812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.543 [2024-04-18 09:19:32.480890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.543 [2024-04-18 09:19:32.480944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.544 [2024-04-18 09:19:32.481023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:30.544 [2024-04-18 09:19:32.481063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.544 [2024-04-18 09:19:32.481130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.544 [2024-04-18 09:19:32.605731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.544 [2024-04-18 09:19:32.605974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:30.544 [2024-04-18 09:19:32.606154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.544 [2024-04-18 09:19:32.606201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.655941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.656219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:30.802 [2024-04-18 09:19:32.656399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.656484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.656582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.656656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:30.802 [2024-04-18 09:19:32.656694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.656756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.656841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.656955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:30.802 [2024-04-18 09:19:32.657017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.657048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.657182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.657269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:30.802 [2024-04-18 09:19:32.657305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.657338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.657418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.657461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:30.802 [2024-04-18 09:19:32.657491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.657581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.657671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.657703] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:30.802 [2024-04-18 09:19:32.657732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.657762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.657830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.802 [2024-04-18 09:19:32.657913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:30.802 [2024-04-18 09:19:32.657955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.802 [2024-04-18 09:19:32.657984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.802 [2024-04-18 09:19:32.658121] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.599 ms, result 0 00:50:32.179 00:50:32.179 00:50:32.179 09:19:34 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:50:34.103 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:50:34.103 09:19:36 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:34.361 [2024-04-18 09:19:36.272176] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:50:34.361 [2024-04-18 09:19:36.272585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83064 ] 00:50:34.361 [2024-04-18 09:19:36.453926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:34.619 [2024-04-18 09:19:36.712358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:35.186 [2024-04-18 09:19:37.177269] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:35.186 [2024-04-18 09:19:37.177563] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:35.446 [2024-04-18 09:19:37.340265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.340553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:35.446 [2024-04-18 09:19:37.340660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:50:35.446 [2024-04-18 09:19:37.340705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.340886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.340933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:35.446 [2024-04-18 09:19:37.340970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:50:35.446 [2024-04-18 09:19:37.341005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.341176] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:35.446 [2024-04-18 09:19:37.342741] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:35.446 [2024-04-18 09:19:37.342904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 
09:19:37.343032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:35.446 [2024-04-18 09:19:37.343077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:50:35.446 [2024-04-18 09:19:37.343158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.344799] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:35.446 [2024-04-18 09:19:37.368998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.369208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:35.446 [2024-04-18 09:19:37.369314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.199 ms 00:50:35.446 [2024-04-18 09:19:37.369355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.369559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.369610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:35.446 [2024-04-18 09:19:37.369789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:50:35.446 [2024-04-18 09:19:37.369826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.377180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.377384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:35.446 [2024-04-18 09:19:37.377494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.232 ms 00:50:35.446 [2024-04-18 09:19:37.377536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.377676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.377720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:35.446 [2024-04-18 09:19:37.377755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:50:35.446 [2024-04-18 09:19:37.377846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.377934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.378040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:35.446 [2024-04-18 09:19:37.378110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:35.446 [2024-04-18 09:19:37.378144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.378198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:35.446 [2024-04-18 09:19:37.384849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.384994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:35.446 [2024-04-18 09:19:37.385079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.659 ms 00:50:35.446 [2024-04-18 09:19:37.385119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.385186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.385254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:35.446 [2024-04-18 
09:19:37.385301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:35.446 [2024-04-18 09:19:37.385334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.385441] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:35.446 [2024-04-18 09:19:37.385501] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:50:35.446 [2024-04-18 09:19:37.385665] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:35.446 [2024-04-18 09:19:37.385723] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:50:35.446 [2024-04-18 09:19:37.385834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:50:35.446 [2024-04-18 09:19:37.385954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:35.446 [2024-04-18 09:19:37.386012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:50:35.446 [2024-04-18 09:19:37.386066] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:35.446 [2024-04-18 09:19:37.386166] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:35.446 [2024-04-18 09:19:37.386235] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:35.446 [2024-04-18 09:19:37.386264] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:35.446 [2024-04-18 09:19:37.386293] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:50:35.446 [2024-04-18 09:19:37.386381] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:50:35.446 [2024-04-18 09:19:37.386441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.386474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:35.446 [2024-04-18 09:19:37.386506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:50:35.446 [2024-04-18 09:19:37.386538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.386685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.446 [2024-04-18 09:19:37.386728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:35.446 [2024-04-18 09:19:37.386765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:50:35.446 [2024-04-18 09:19:37.386797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.446 [2024-04-18 09:19:37.386940] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:35.446 [2024-04-18 09:19:37.386977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:35.446 [2024-04-18 09:19:37.387009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:35.446 [2024-04-18 09:19:37.387084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:35.446 [2024-04-18 09:19:37.387122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:35.446 [2024-04-18 09:19:37.387196] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:50:35.446 [2024-04-18 09:19:37.387232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:35.446 [2024-04-18 09:19:37.387296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:35.446 [2024-04-18 09:19:37.387332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:35.446 [2024-04-18 09:19:37.387363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:35.446 [2024-04-18 09:19:37.387415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:35.446 [2024-04-18 09:19:37.387487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:35.446 [2024-04-18 09:19:37.387535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:35.446 [2024-04-18 09:19:37.387566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:35.446 [2024-04-18 09:19:37.387597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:50:35.446 [2024-04-18 09:19:37.387663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:35.446 [2024-04-18 09:19:37.387737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:35.446 [2024-04-18 09:19:37.387773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:50:35.446 [2024-04-18 09:19:37.387837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:35.446 [2024-04-18 09:19:37.387873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:50:35.446 [2024-04-18 09:19:37.387904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:50:35.446 [2024-04-18 09:19:37.387974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:50:35.446 [2024-04-18 09:19:37.388039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:35.446 [2024-04-18 09:19:37.388074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:35.446 [2024-04-18 09:19:37.388108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:35.446 [2024-04-18 09:19:37.388177] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:35.446 [2024-04-18 09:19:37.388249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:50:35.447 [2024-04-18 09:19:37.388343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:35.447 [2024-04-18 09:19:37.388394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:35.447 [2024-04-18 09:19:37.388430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:35.447 [2024-04-18 09:19:37.388463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:35.447 [2024-04-18 09:19:37.388521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:35.447 [2024-04-18 09:19:37.388599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:50:35.447 [2024-04-18 09:19:37.388672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:50:35.447 [2024-04-18 09:19:37.388749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:35.447 [2024-04-18 09:19:37.388782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:35.447 [2024-04-18 09:19:37.388815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:35.447 [2024-04-18 09:19:37.388848] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:35.447 [2024-04-18 09:19:37.388881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:50:35.447 [2024-04-18 09:19:37.388913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:35.447 [2024-04-18 09:19:37.389001] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:35.447 [2024-04-18 09:19:37.389042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:35.447 [2024-04-18 09:19:37.389083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:35.447 [2024-04-18 09:19:37.389133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:35.447 [2024-04-18 09:19:37.389165] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:35.447 [2024-04-18 09:19:37.389197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:35.447 [2024-04-18 09:19:37.389299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:35.447 [2024-04-18 09:19:37.389332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:35.447 [2024-04-18 09:19:37.389363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:35.447 [2024-04-18 09:19:37.389415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:35.447 [2024-04-18 09:19:37.389491] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:35.447 [2024-04-18 09:19:37.389589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:35.447 [2024-04-18 09:19:37.389689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:35.447 [2024-04-18 09:19:37.389746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:50:35.447 [2024-04-18 09:19:37.389797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:50:35.447 [2024-04-18 09:19:37.389892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:50:35.447 [2024-04-18 09:19:37.389945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:50:35.447 [2024-04-18 09:19:37.389996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:50:35.447 [2024-04-18 09:19:37.390091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:50:35.447 [2024-04-18 09:19:37.390232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:50:35.447 [2024-04-18 09:19:37.390285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:50:35.447 [2024-04-18 09:19:37.390335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:50:35.447 [2024-04-18 09:19:37.390397] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:50:35.447 [2024-04-18 09:19:37.390500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:50:35.447 [2024-04-18 09:19:37.390554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:50:35.447 [2024-04-18 09:19:37.390605] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:35.447 [2024-04-18 09:19:37.390700] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:35.447 [2024-04-18 09:19:37.390758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:35.447 [2024-04-18 09:19:37.390809] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:35.447 [2024-04-18 09:19:37.390860] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:35.447 [2024-04-18 09:19:37.390957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:35.447 [2024-04-18 09:19:37.391012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.391044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:35.447 [2024-04-18 09:19:37.391077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms 00:50:35.447 [2024-04-18 09:19:37.391108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.418355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.418549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:35.447 [2024-04-18 09:19:37.418672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.098 ms 00:50:35.447 [2024-04-18 09:19:37.418711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.418828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.418968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:35.447 [2024-04-18 09:19:37.419010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:50:35.447 [2024-04-18 09:19:37.419042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.490952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.491202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:35.447 [2024-04-18 09:19:37.491368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.815 ms 00:50:35.447 [2024-04-18 09:19:37.491432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.491522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.491648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
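Each management step traced above and below is emitted by trace_step as a name/duration/status triple, so per-step timings can be pulled out of a captured log directly. A hypothetical post-processing one-liner (the log file name is an assumption, and it assumes one record per line as in the raw console output):

    # Pair each 'name:' record with the 'duration:' record that follows it.
    grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' ftl0_startup.log | paste - -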
00:50:35.447 [2024-04-18 09:19:37.491736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:35.447 [2024-04-18 09:19:37.491771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.492341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.492477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:35.447 [2024-04-18 09:19:37.492560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:50:35.447 [2024-04-18 09:19:37.492599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.492782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.492827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:35.447 [2024-04-18 09:19:37.492899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:50:35.447 [2024-04-18 09:19:37.492939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.519772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.519992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:35.447 [2024-04-18 09:19:37.520273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.777 ms 00:50:35.447 [2024-04-18 09:19:37.520315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.447 [2024-04-18 09:19:37.544423] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:50:35.447 [2024-04-18 09:19:37.544694] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:35.447 [2024-04-18 09:19:37.544812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.447 [2024-04-18 09:19:37.544853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:35.447 [2024-04-18 09:19:37.544893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.301 ms 00:50:35.447 [2024-04-18 09:19:37.544928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.583492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.583744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:35.706 [2024-04-18 09:19:37.583867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.417 ms 00:50:35.706 [2024-04-18 09:19:37.583912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.606925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.607151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:35.706 [2024-04-18 09:19:37.607248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.906 ms 00:50:35.706 [2024-04-18 09:19:37.607304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.628071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.628270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:35.706 [2024-04-18 09:19:37.628356] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.646 ms 00:50:35.706 [2024-04-18 09:19:37.628414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.629014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.629149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:35.706 [2024-04-18 09:19:37.629228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:50:35.706 [2024-04-18 09:19:37.629267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.729907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.730158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:35.706 [2024-04-18 09:19:37.730323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.587 ms 00:50:35.706 [2024-04-18 09:19:37.730365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.744040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:50:35.706 [2024-04-18 09:19:37.747505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.747645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:35.706 [2024-04-18 09:19:37.747785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.043 ms 00:50:35.706 [2024-04-18 09:19:37.747825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.747955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.748142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:35.706 [2024-04-18 09:19:37.748221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:35.706 [2024-04-18 09:19:37.748255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.749203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.749318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:35.706 [2024-04-18 09:19:37.749408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:50:35.706 [2024-04-18 09:19:37.749447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.751677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.751776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:50:35.706 [2024-04-18 09:19:37.751865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:50:35.706 [2024-04-18 09:19:37.751901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.751960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.752004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:35.706 [2024-04-18 09:19:37.752039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:35.706 [2024-04-18 09:19:37.752088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.752276] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:35.706 [2024-04-18 09:19:37.752316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.752350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:35.706 [2024-04-18 09:19:37.752385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:50:35.706 [2024-04-18 09:19:37.752439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.794004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.794251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:35.706 [2024-04-18 09:19:37.794341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.512 ms 00:50:35.706 [2024-04-18 09:19:37.794396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.794514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:35.706 [2024-04-18 09:19:37.794563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:35.706 [2024-04-18 09:19:37.794672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:50:35.706 [2024-04-18 09:19:37.794723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:35.706 [2024-04-18 09:19:37.796010] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 455.277 ms, result 0 00:51:07.294  Copying: 34/1024 [MB] (34 MBps) Copying: 68/1024 [MB] (34 MBps) Copying: 100/1024 [MB] (32 MBps) Copying: 133/1024 [MB] (32 MBps) Copying: 166/1024 [MB] (32 MBps) Copying: 199/1024 [MB] (32 MBps) Copying: 232/1024 [MB] (33 MBps) Copying: 265/1024 [MB] (32 MBps) Copying: 298/1024 [MB] (32 MBps) Copying: 331/1024 [MB] (32 MBps) Copying: 364/1024 [MB] (33 MBps) Copying: 398/1024 [MB] (33 MBps) Copying: 431/1024 [MB] (32 MBps) Copying: 464/1024 [MB] (33 MBps) Copying: 498/1024 [MB] (33 MBps) Copying: 531/1024 [MB] (33 MBps) Copying: 566/1024 [MB] (34 MBps) Copying: 599/1024 [MB] (33 MBps) Copying: 628/1024 [MB] (28 MBps) Copying: 658/1024 [MB] (30 MBps) Copying: 692/1024 [MB] (33 MBps) Copying: 724/1024 [MB] (31 MBps) Copying: 758/1024 [MB] (34 MBps) Copying: 792/1024 [MB] (34 MBps) Copying: 823/1024 [MB] (31 MBps) Copying: 855/1024 [MB] (31 MBps) Copying: 888/1024 [MB] (32 MBps) Copying: 924/1024 [MB] (36 MBps) Copying: 956/1024 [MB] (32 MBps) Copying: 989/1024 [MB] (33 MBps) Copying: 1022/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-04-18 09:20:09.135657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.294 [2024-04-18 09:20:09.135922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:07.294 [2024-04-18 09:20:09.136052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:51:07.294 [2024-04-18 09:20:09.136106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.294 [2024-04-18 09:20:09.136178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:07.294 [2024-04-18 09:20:09.141124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.294 [2024-04-18 09:20:09.141294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:07.294 [2024-04-18 09:20:09.141443] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:51:07.294 [2024-04-18 09:20:09.141544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.294 [2024-04-18 09:20:09.141827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.294 [2024-04-18 09:20:09.141891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:07.294 [2024-04-18 09:20:09.142443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:51:07.295 [2024-04-18 09:20:09.142489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.146811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.146963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:07.295 [2024-04-18 09:20:09.147053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:51:07.295 [2024-04-18 09:20:09.147094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.153335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.153508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:51:07.295 [2024-04-18 09:20:09.153603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:51:07.295 [2024-04-18 09:20:09.153676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.202259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.202521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:07.295 [2024-04-18 09:20:09.202613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.623 ms 00:51:07.295 [2024-04-18 09:20:09.202657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.229894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.230130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:07.295 [2024-04-18 09:20:09.230239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.161 ms 00:51:07.295 [2024-04-18 09:20:09.230279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.233972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.234134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:07.295 [2024-04-18 09:20:09.234212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:51:07.295 [2024-04-18 09:20:09.234251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.282638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.282832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:51:07.295 [2024-04-18 09:20:09.282923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.289 ms 00:51:07.295 [2024-04-18 09:20:09.282964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.331541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.331765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 
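This second shutdown persists each metadata region (L2P, NV cache, valid map, P2L, band info, trim, superblock) and then records the 'Set FTL clean state' step just below, marking that no crash recovery is needed on the next load; the earlier dirty run instead left 'Set FTL dirty state' in its startup trace. An illustrative check over a captured log (the log path is assumed):

    grep -E 'Set FTL (clean|dirty) state' ftl0.log   # clean: graceful shutdown; dirty: recovery on next start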
00:51:07.295 [2024-04-18 09:20:09.331943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.457 ms 00:51:07.295 [2024-04-18 09:20:09.331985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.295 [2024-04-18 09:20:09.376293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.295 [2024-04-18 09:20:09.376567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:07.295 [2024-04-18 09:20:09.376654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.173 ms 00:51:07.295 [2024-04-18 09:20:09.376696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.555 [2024-04-18 09:20:09.419939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.555 [2024-04-18 09:20:09.420240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:07.555 [2024-04-18 09:20:09.420425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.104 ms 00:51:07.555 [2024-04-18 09:20:09.420470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.555 [2024-04-18 09:20:09.420539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:07.555 [2024-04-18 09:20:09.420667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:07.555 [2024-04-18 09:20:09.420736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:51:07.555 [2024-04-18 09:20:09.420792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.420847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.420902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.420960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 
[2024-04-18 09:20:09.421850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.421949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:07.555 [2024-04-18 09:20:09.422476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.422999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 
state: free 00:51:07.556 [2024-04-18 09:20:09.423778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.423936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.424911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 
0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.425974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.426963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.427964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.428149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.428210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.428266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:07.556 [2024-04-18 09:20:09.428389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:07.556 [2024-04-18 09:20:09.428470] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73c17e2f-1251-4840-aae8-19f53c13030e 00:51:07.556 [2024-04-18 09:20:09.428580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:51:07.556 [2024-04-18 09:20:09.428617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:51:07.556 [2024-04-18 09:20:09.428663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:51:07.556 [2024-04-18 09:20:09.428730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:51:07.556 [2024-04-18 09:20:09.428812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:07.556 [2024-04-18 09:20:09.428852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:07.556 [2024-04-18 09:20:09.428887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:07.556 [2024-04-18 09:20:09.428955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:07.556 [2024-04-18 09:20:09.428994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:07.556 [2024-04-18 09:20:09.429030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.556 [2024-04-18 09:20:09.429113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:07.556 [2024-04-18 09:20:09.429155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.492 ms 00:51:07.556 [2024-04-18 09:20:09.429189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.556 [2024-04-18 09:20:09.452177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.557 [2024-04-18 09:20:09.452464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:07.557 [2024-04-18 09:20:09.452624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:51:07.557 [2024-04-18 09:20:09.452667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.453008] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:07.557 [2024-04-18 09:20:09.453047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:07.557 [2024-04-18 09:20:09.453127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:51:07.557 [2024-04-18 09:20:09.453179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.512462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.557 [2024-04-18 09:20:09.512719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:07.557 [2024-04-18 09:20:09.512851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.557 [2024-04-18 09:20:09.512892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.512996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.557 [2024-04-18 09:20:09.513132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:07.557 [2024-04-18 09:20:09.513193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.557 [2024-04-18 09:20:09.513225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.513338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.557 [2024-04-18 09:20:09.513393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:07.557 [2024-04-18 09:20:09.513500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.557 [2024-04-18 09:20:09.513539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.513587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.557 [2024-04-18 09:20:09.513621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:07.557 [2024-04-18 09:20:09.513653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.557 [2024-04-18 09:20:09.513732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.557 [2024-04-18 09:20:09.639739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.557 [2024-04-18 09:20:09.639988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:07.557 [2024-04-18 09:20:09.640195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.557 [2024-04-18 09:20:09.640238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.692040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.692305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:07.816 [2024-04-18 09:20:09.692423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.692468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.692574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.692624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:07.816 [2024-04-18 09:20:09.692719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.692760] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.692837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.692875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:07.816 [2024-04-18 09:20:09.692910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.693039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.693202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.693242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:07.816 [2024-04-18 09:20:09.693348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.693380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.693524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.693565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:07.816 [2024-04-18 09:20:09.693598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.693690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.693785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.693819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:07.816 [2024-04-18 09:20:09.693857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.693889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.693953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:07.816 [2024-04-18 09:20:09.694028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:07.816 [2024-04-18 09:20:09.694090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:07.816 [2024-04-18 09:20:09.694122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:07.816 [2024-04-18 09:20:09.694269] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.578 ms, result 0 00:51:09.191 00:51:09.191 00:51:09.191 09:20:11 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:51:11.103 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:51:11.103 09:20:13 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:51:11.103 09:20:13 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:51:11.103 09:20:13 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:11.103 09:20:13 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:51:11.361 09:20:13 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:51:11.361 09:20:13 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:51:11.361 09:20:13 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:51:11.361 09:20:13 -- ftl/dirty_shutdown.sh@37 -- # killprocess 81436 00:51:11.361 09:20:13 -- common/autotest_common.sh@936 -- # '[' -z 81436 ']' 
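The md5sum -c record a few lines above is the crux of the dirty-shutdown check: a checksum captured before the unclean shutdown must still verify once the FTL device has been recovered. A minimal sketch of that round-trip in plain bash, using an illustrative scratch path rather than the suite's own testfile2:

# Record a checksum, simulate the dirty-shutdown window, then verify.
# /tmp/ftl_scratch is illustrative only.
scratch=/tmp/ftl_scratch
dd if=/dev/urandom of="$scratch" bs=1M count=4 status=none
md5sum "$scratch" > "$scratch.md5"   # checksum recorded before the shutdown
# ... device is torn down uncleanly and brought back up here ...
md5sum -c "$scratch.md5"             # prints "OK" only if the data survived intact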
00:51:11.361 09:20:13 -- common/autotest_common.sh@940 -- # kill -0 81436 00:51:11.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81436) - No such process 00:51:11.361 09:20:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81436 is not found' 00:51:11.361 Process with pid 81436 is not found 00:51:11.361 09:20:13 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:51:11.619 09:20:13 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:51:11.619 09:20:13 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:51:11.619 Remove shared memory files 00:51:11.619 09:20:13 -- ftl/common.sh@205 -- # rm -f rm -f 00:51:11.619 09:20:13 -- ftl/common.sh@206 -- # rm -f rm -f 00:51:11.619 09:20:13 -- ftl/common.sh@207 -- # rm -f rm -f 00:51:11.619 09:20:13 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:51:11.619 09:20:13 -- ftl/common.sh@209 -- # rm -f rm -f 00:51:11.619 ************************************ 00:51:11.619 END TEST ftl_dirty_shutdown 00:51:11.619 ************************************ 00:51:11.619 00:51:11.619 real 3m14.163s 00:51:11.619 user 3m39.897s 00:51:11.619 sys 0m36.988s 00:51:11.619 09:20:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:51:11.619 09:20:13 -- common/autotest_common.sh@10 -- # set +x 00:51:11.619 09:20:13 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:51:11.619 09:20:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:51:11.619 09:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:51:11.619 09:20:13 -- common/autotest_common.sh@10 -- # set +x 00:51:11.878 ************************************ 00:51:11.878 START TEST ftl_upgrade_shutdown 00:51:11.878 ************************************ 00:51:11.878 09:20:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:51:11.878 * Looking for test storage... 00:51:11.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:51:11.878 09:20:13 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:51:11.878 09:20:13 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:51:11.878 09:20:13 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:51:11.878 09:20:13 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
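The dirname/readlink pair traced above is the harness's standard way of locating itself before sourcing common.sh. Sketched standalone, with "$0" standing in for upgrade_shutdown.sh:

# Resolve the script's own directory, then the repository root two levels up.
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../..")
rpc_py=$rootdir/scripts/rpc.py   # RPC client used for all the bdev setup that follows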
00:51:11.878 09:20:13 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:51:11.878 09:20:13 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:51:11.878 09:20:13 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:51:11.878 09:20:13 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:51:11.878 09:20:13 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:11.878 09:20:13 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:11.878 09:20:13 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:51:11.878 09:20:13 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:51:11.878 09:20:13 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:11.878 09:20:13 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:11.878 09:20:13 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:51:11.878 09:20:13 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:51:11.878 09:20:13 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:11.878 09:20:13 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:51:11.878 09:20:13 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:51:11.878 09:20:13 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:51:11.878 09:20:13 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:51:11.878 09:20:13 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:51:11.878 09:20:13 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:51:11.878 09:20:13 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:51:11.878 09:20:13 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:51:11.878 09:20:13 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:51:11.878 09:20:13 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:51:11.878 09:20:13 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:51:11.878 09:20:13 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:51:11.878 09:20:13 -- ftl/common.sh@81 -- # local base_bdev= 00:51:11.878 09:20:13 -- ftl/common.sh@82 -- # local cache_bdev= 00:51:11.878 09:20:13 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:11.878 09:20:13 -- ftl/common.sh@89 -- # spdk_tgt_pid=83507 00:51:11.878 09:20:13 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:51:11.878 09:20:13 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:51:11.878 09:20:13 -- ftl/common.sh@91 -- # waitforlisten 83507 00:51:11.878 09:20:13 -- common/autotest_common.sh@817 -- # '[' -z 83507 ']' 00:51:11.878 09:20:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:11.878 09:20:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:51:11.879 09:20:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:11.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:11.879 09:20:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:51:11.879 09:20:13 -- common/autotest_common.sh@10 -- # set +x 00:51:12.136 [2024-04-18 09:20:14.062096] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:51:12.136 [2024-04-18 09:20:14.062576] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83507 ] 00:51:12.394 [2024-04-18 09:20:14.250881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:12.652 [2024-04-18 09:20:14.581759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:13.586 09:20:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:51:13.586 09:20:15 -- common/autotest_common.sh@850 -- # return 0 00:51:13.586 09:20:15 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:13.586 09:20:15 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:51:13.586 09:20:15 -- ftl/common.sh@99 -- # local params 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:51:13.586 09:20:15 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:51:13.586 09:20:15 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:51:13.586 09:20:15 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:51:13.586 09:20:15 -- ftl/common.sh@54 -- # local name=base 00:51:13.586 09:20:15 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:51:13.586 09:20:15 -- ftl/common.sh@56 -- # local size=20480 00:51:13.586 09:20:15 -- ftl/common.sh@59 -- # local base_bdev 00:51:13.586 09:20:15 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:51:13.845 09:20:15 -- ftl/common.sh@60 -- # base_bdev=basen1 00:51:13.845 09:20:15 -- ftl/common.sh@62 -- # local 
base_size 00:51:13.845 09:20:15 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:51:13.845 09:20:15 -- common/autotest_common.sh@1364 -- # local bdev_name=basen1 00:51:13.845 09:20:15 -- common/autotest_common.sh@1365 -- # local bdev_info 00:51:13.845 09:20:15 -- common/autotest_common.sh@1366 -- # local bs 00:51:13.845 09:20:15 -- common/autotest_common.sh@1367 -- # local nb 00:51:13.845 09:20:15 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:51:14.104 09:20:16 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:51:14.104 { 00:51:14.104 "name": "basen1", 00:51:14.104 "aliases": [ 00:51:14.104 "f18f63ae-e0f1-41a3-be22-6e58614e18db" 00:51:14.104 ], 00:51:14.104 "product_name": "NVMe disk", 00:51:14.104 "block_size": 4096, 00:51:14.104 "num_blocks": 1310720, 00:51:14.104 "uuid": "f18f63ae-e0f1-41a3-be22-6e58614e18db", 00:51:14.104 "assigned_rate_limits": { 00:51:14.104 "rw_ios_per_sec": 0, 00:51:14.104 "rw_mbytes_per_sec": 0, 00:51:14.104 "r_mbytes_per_sec": 0, 00:51:14.104 "w_mbytes_per_sec": 0 00:51:14.104 }, 00:51:14.104 "claimed": true, 00:51:14.104 "claim_type": "read_many_write_one", 00:51:14.104 "zoned": false, 00:51:14.104 "supported_io_types": { 00:51:14.104 "read": true, 00:51:14.104 "write": true, 00:51:14.104 "unmap": true, 00:51:14.104 "write_zeroes": true, 00:51:14.104 "flush": true, 00:51:14.104 "reset": true, 00:51:14.104 "compare": true, 00:51:14.104 "compare_and_write": false, 00:51:14.104 "abort": true, 00:51:14.104 "nvme_admin": true, 00:51:14.104 "nvme_io": true 00:51:14.104 }, 00:51:14.104 "driver_specific": { 00:51:14.104 "nvme": [ 00:51:14.104 { 00:51:14.104 "pci_address": "0000:00:11.0", 00:51:14.104 "trid": { 00:51:14.104 "trtype": "PCIe", 00:51:14.104 "traddr": "0000:00:11.0" 00:51:14.104 }, 00:51:14.104 "ctrlr_data": { 00:51:14.104 "cntlid": 0, 00:51:14.104 "vendor_id": "0x1b36", 00:51:14.104 "model_number": "QEMU NVMe Ctrl", 00:51:14.104 "serial_number": "12341", 00:51:14.104 "firmware_revision": "8.0.0", 00:51:14.104 "subnqn": "nqn.2019-08.org.qemu:12341", 00:51:14.104 "oacs": { 00:51:14.104 "security": 0, 00:51:14.104 "format": 1, 00:51:14.104 "firmware": 0, 00:51:14.104 "ns_manage": 1 00:51:14.104 }, 00:51:14.104 "multi_ctrlr": false, 00:51:14.104 "ana_reporting": false 00:51:14.104 }, 00:51:14.104 "vs": { 00:51:14.104 "nvme_version": "1.4" 00:51:14.104 }, 00:51:14.104 "ns_data": { 00:51:14.104 "id": 1, 00:51:14.104 "can_share": false 00:51:14.104 } 00:51:14.104 } 00:51:14.104 ], 00:51:14.104 "mp_policy": "active_passive" 00:51:14.104 } 00:51:14.104 } 00:51:14.104 ]' 00:51:14.104 09:20:16 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:51:14.104 09:20:16 -- common/autotest_common.sh@1369 -- # bs=4096 00:51:14.104 09:20:16 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:51:14.104 09:20:16 -- common/autotest_common.sh@1370 -- # nb=1310720 00:51:14.104 09:20:16 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:51:14.104 09:20:16 -- common/autotest_common.sh@1374 -- # echo 5120 00:51:14.104 09:20:16 -- ftl/common.sh@63 -- # base_size=5120 00:51:14.104 09:20:16 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:51:14.104 09:20:16 -- ftl/common.sh@67 -- # clear_lvols 00:51:14.104 09:20:16 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:51:14.104 09:20:16 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:51:14.362 09:20:16 -- ftl/common.sh@28 -- # stores=eada9c4a-bc5c-4bdf-bbe5-6b53d4049260 00:51:14.362 09:20:16 -- 
ftl/common.sh@29 -- # for lvs in $stores 00:51:14.362 09:20:16 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eada9c4a-bc5c-4bdf-bbe5-6b53d4049260 00:51:14.621 09:20:16 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:51:14.881 09:20:16 -- ftl/common.sh@68 -- # lvs=d0c92970-d141-4ee0-af97-7b861b97e9fa 00:51:14.881 09:20:16 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d0c92970-d141-4ee0-af97-7b861b97e9fa 00:51:15.140 09:20:17 -- ftl/common.sh@107 -- # base_bdev=718099e0-d630-425d-9e75-1659697576b5 00:51:15.140 09:20:17 -- ftl/common.sh@108 -- # [[ -z 718099e0-d630-425d-9e75-1659697576b5 ]] 00:51:15.140 09:20:17 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 718099e0-d630-425d-9e75-1659697576b5 5120 00:51:15.140 09:20:17 -- ftl/common.sh@35 -- # local name=cache 00:51:15.140 09:20:17 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:51:15.140 09:20:17 -- ftl/common.sh@37 -- # local base_bdev=718099e0-d630-425d-9e75-1659697576b5 00:51:15.140 09:20:17 -- ftl/common.sh@38 -- # local cache_size=5120 00:51:15.140 09:20:17 -- ftl/common.sh@41 -- # get_bdev_size 718099e0-d630-425d-9e75-1659697576b5 00:51:15.140 09:20:17 -- common/autotest_common.sh@1364 -- # local bdev_name=718099e0-d630-425d-9e75-1659697576b5 00:51:15.140 09:20:17 -- common/autotest_common.sh@1365 -- # local bdev_info 00:51:15.140 09:20:17 -- common/autotest_common.sh@1366 -- # local bs 00:51:15.140 09:20:17 -- common/autotest_common.sh@1367 -- # local nb 00:51:15.140 09:20:17 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 718099e0-d630-425d-9e75-1659697576b5 00:51:15.140 09:20:17 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:51:15.140 { 00:51:15.140 "name": "718099e0-d630-425d-9e75-1659697576b5", 00:51:15.140 "aliases": [ 00:51:15.140 "lvs/basen1p0" 00:51:15.140 ], 00:51:15.140 "product_name": "Logical Volume", 00:51:15.140 "block_size": 4096, 00:51:15.140 "num_blocks": 5242880, 00:51:15.140 "uuid": "718099e0-d630-425d-9e75-1659697576b5", 00:51:15.140 "assigned_rate_limits": { 00:51:15.140 "rw_ios_per_sec": 0, 00:51:15.140 "rw_mbytes_per_sec": 0, 00:51:15.140 "r_mbytes_per_sec": 0, 00:51:15.140 "w_mbytes_per_sec": 0 00:51:15.140 }, 00:51:15.140 "claimed": false, 00:51:15.140 "zoned": false, 00:51:15.140 "supported_io_types": { 00:51:15.140 "read": true, 00:51:15.140 "write": true, 00:51:15.140 "unmap": true, 00:51:15.140 "write_zeroes": true, 00:51:15.140 "flush": false, 00:51:15.140 "reset": true, 00:51:15.140 "compare": false, 00:51:15.140 "compare_and_write": false, 00:51:15.140 "abort": false, 00:51:15.140 "nvme_admin": false, 00:51:15.140 "nvme_io": false 00:51:15.140 }, 00:51:15.140 "driver_specific": { 00:51:15.140 "lvol": { 00:51:15.140 "lvol_store_uuid": "d0c92970-d141-4ee0-af97-7b861b97e9fa", 00:51:15.140 "base_bdev": "basen1", 00:51:15.140 "thin_provision": true, 00:51:15.140 "snapshot": false, 00:51:15.140 "clone": false, 00:51:15.140 "esnap_clone": false 00:51:15.141 } 00:51:15.141 } 00:51:15.141 } 00:51:15.141 ]' 00:51:15.141 09:20:17 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:51:15.400 09:20:17 -- common/autotest_common.sh@1369 -- # bs=4096 00:51:15.400 09:20:17 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:51:15.400 09:20:17 -- common/autotest_common.sh@1370 -- # nb=5242880 00:51:15.400 09:20:17 -- 
common/autotest_common.sh@1373 -- # bdev_size=20480 00:51:15.400 09:20:17 -- common/autotest_common.sh@1374 -- # echo 20480 00:51:15.400 09:20:17 -- ftl/common.sh@41 -- # local base_size=1024 00:51:15.400 09:20:17 -- ftl/common.sh@44 -- # local nvc_bdev 00:51:15.400 09:20:17 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:51:15.659 09:20:17 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:51:15.659 09:20:17 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:51:15.659 09:20:17 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:51:15.917 09:20:17 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:51:15.917 09:20:17 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:51:15.917 09:20:17 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 718099e0-d630-425d-9e75-1659697576b5 -c cachen1p0 --l2p_dram_limit 2 00:51:16.177 [2024-04-18 09:20:18.268651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.178 [2024-04-18 09:20:18.268939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:51:16.178 [2024-04-18 09:20:18.269121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:51:16.178 [2024-04-18 09:20:18.269252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.178 [2024-04-18 09:20:18.269442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.178 [2024-04-18 09:20:18.269505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:51:16.178 [2024-04-18 09:20:18.269650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:51:16.178 [2024-04-18 09:20:18.269788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.178 [2024-04-18 09:20:18.269973] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:51:16.178 [2024-04-18 09:20:18.271260] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:51:16.178 [2024-04-18 09:20:18.271448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.178 [2024-04-18 09:20:18.271534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:51:16.178 [2024-04-18 09:20:18.271578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.500 ms 00:51:16.178 [2024-04-18 09:20:18.271638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.178 [2024-04-18 09:20:18.271793] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9a5d0302-b8f4-49f8-be6e-63a4edcb4df5 00:51:16.178 [2024-04-18 09:20:18.273449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.178 [2024-04-18 09:20:18.273590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:51:16.178 [2024-04-18 09:20:18.273686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:51:16.178 [2024-04-18 09:20:18.273729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.281608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.281828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:51:16.438 [2024-04-18 09:20:18.281913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 7.726 ms 00:51:16.438 [2024-04-18 09:20:18.281956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.282093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.282134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:51:16.438 [2024-04-18 09:20:18.282169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:51:16.438 [2024-04-18 09:20:18.282231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.282327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.282406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:51:16.438 [2024-04-18 09:20:18.282490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:51:16.438 [2024-04-18 09:20:18.282599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.282704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:51:16.438 [2024-04-18 09:20:18.289291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.289438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:51:16.438 [2024-04-18 09:20:18.289542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.593 ms 00:51:16.438 [2024-04-18 09:20:18.289610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.289673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.289760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:51:16.438 [2024-04-18 09:20:18.289800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:51:16.438 [2024-04-18 09:20:18.289830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.289928] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:51:16.438 [2024-04-18 09:20:18.290067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:51:16.438 [2024-04-18 09:20:18.290218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:51:16.438 [2024-04-18 09:20:18.290308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:51:16.438 [2024-04-18 09:20:18.290461] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:51:16.438 [2024-04-18 09:20:18.290598] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:51:16.438 [2024-04-18 09:20:18.290680] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:51:16.438 [2024-04-18 09:20:18.290710] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:51:16.438 [2024-04-18 09:20:18.290742] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:51:16.438 [2024-04-18 09:20:18.290772] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:51:16.438 [2024-04-18 09:20:18.290861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 
09:20:18.290895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:51:16.438 [2024-04-18 09:20:18.290928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:51:16.438 [2024-04-18 09:20:18.290973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.291071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.438 [2024-04-18 09:20:18.291152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:51:16.438 [2024-04-18 09:20:18.291194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:51:16.438 [2024-04-18 09:20:18.291262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.438 [2024-04-18 09:20:18.291362] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:51:16.438 [2024-04-18 09:20:18.291412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:51:16.438 [2024-04-18 09:20:18.291447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:16.438 [2024-04-18 09:20:18.291477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.291559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:51:16.438 [2024-04-18 09:20:18.291593] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.291625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:51:16.438 [2024-04-18 09:20:18.291667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:51:16.438 [2024-04-18 09:20:18.291729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:51:16.438 [2024-04-18 09:20:18.291762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.291831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:51:16.438 [2024-04-18 09:20:18.291864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:51:16.438 [2024-04-18 09:20:18.291896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.291952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:51:16.438 [2024-04-18 09:20:18.291985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:51:16.438 [2024-04-18 09:20:18.292062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.292104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:51:16.438 [2024-04-18 09:20:18.292212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:51:16.438 [2024-04-18 09:20:18.292252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.292285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:51:16.438 [2024-04-18 09:20:18.292343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:51:16.438 [2024-04-18 09:20:18.292401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:51:16.438 [2024-04-18 09:20:18.292440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:51:16.438 [2024-04-18 09:20:18.292501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:51:16.438 [2024-04-18 09:20:18.292539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:51:16.438 [2024-04-18 
09:20:18.292571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:51:16.438 [2024-04-18 09:20:18.292605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:51:16.438 [2024-04-18 09:20:18.292702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:51:16.438 [2024-04-18 09:20:18.292743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:51:16.438 [2024-04-18 09:20:18.292775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:51:16.438 [2024-04-18 09:20:18.292840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:51:16.438 [2024-04-18 09:20:18.292872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:51:16.438 [2024-04-18 09:20:18.292906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:51:16.438 [2024-04-18 09:20:18.292937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:51:16.438 [2024-04-18 09:20:18.292999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:51:16.438 [2024-04-18 09:20:18.293031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:51:16.438 [2024-04-18 09:20:18.293067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.293099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:51:16.438 [2024-04-18 09:20:18.293158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:51:16.438 [2024-04-18 09:20:18.293190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.293237] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:51:16.438 [2024-04-18 09:20:18.293267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:51:16.438 [2024-04-18 09:20:18.293349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:16.438 [2024-04-18 09:20:18.293404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:16.438 [2024-04-18 09:20:18.293485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:51:16.438 [2024-04-18 09:20:18.293520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:51:16.438 [2024-04-18 09:20:18.293552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:51:16.438 [2024-04-18 09:20:18.293596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:51:16.438 [2024-04-18 09:20:18.293661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:51:16.438 [2024-04-18 09:20:18.293695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:51:16.438 [2024-04-18 09:20:18.293766] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:51:16.438 [2024-04-18 09:20:18.293820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:16.438 [2024-04-18 09:20:18.293922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:51:16.439 [2024-04-18 09:20:18.294014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 
blk_offs:0xec0 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:51:16.439 [2024-04-18 09:20:18.294293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:51:16.439 [2024-04-18 09:20:18.294439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:51:16.439 [2024-04-18 09:20:18.294533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:51:16.439 [2024-04-18 09:20:18.294611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.294852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:51:16.439 [2024-04-18 09:20:18.294921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:51:16.439 [2024-04-18 09:20:18.294968] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:51:16.439 [2024-04-18 09:20:18.295060] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.295112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:16.439 [2024-04-18 09:20:18.295241] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:51:16.439 [2024-04-18 09:20:18.295294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:51:16.439 [2024-04-18 09:20:18.295425] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:51:16.439 [2024-04-18 09:20:18.295483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.295562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:51:16.439 [2024-04-18 09:20:18.295601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.160 ms 00:51:16.439 [2024-04-18 09:20:18.295636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.323676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.323932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:51:16.439 [2024-04-18 09:20:18.324076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 27.918 ms 00:51:16.439 [2024-04-18 09:20:18.324133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.324258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.324354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:51:16.439 [2024-04-18 09:20:18.324417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:51:16.439 [2024-04-18 09:20:18.324483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.386090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.386320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:51:16.439 [2024-04-18 09:20:18.386490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 61.440 ms 00:51:16.439 [2024-04-18 09:20:18.386569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.386653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.386724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:51:16.439 [2024-04-18 09:20:18.386780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:51:16.439 [2024-04-18 09:20:18.386902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.387487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.387626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:51:16.439 [2024-04-18 09:20:18.387718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:51:16.439 [2024-04-18 09:20:18.387763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.387876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.387964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:51:16.439 [2024-04-18 09:20:18.388084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:51:16.439 [2024-04-18 09:20:18.388151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.414850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.415070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:51:16.439 [2024-04-18 09:20:18.415158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.633 ms 00:51:16.439 [2024-04-18 09:20:18.415205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.432050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:51:16.439 [2024-04-18 09:20:18.433428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.433592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:51:16.439 [2024-04-18 09:20:18.433721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.987 ms 00:51:16.439 [2024-04-18 09:20:18.433758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.469252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:16.439 [2024-04-18 09:20:18.469509] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:51:16.439 [2024-04-18 09:20:18.469637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.380 ms 00:51:16.439 [2024-04-18 09:20:18.469683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:16.439 [2024-04-18 09:20:18.469812] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:51:16.439 [2024-04-18 09:20:18.469951] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:51:18.970 [2024-04-18 09:20:21.062114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:18.971 [2024-04-18 09:20:21.062382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:51:18.971 [2024-04-18 09:20:21.062493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2592.287 ms 00:51:18.971 [2024-04-18 09:20:21.062538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:18.971 [2024-04-18 09:20:21.062691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:18.971 [2024-04-18 09:20:21.062843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:51:18.971 [2024-04-18 09:20:21.062918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:51:18.971 [2024-04-18 09:20:21.062951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.105626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.105862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:51:19.230 [2024-04-18 09:20:21.106009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 42.573 ms 00:51:19.230 [2024-04-18 09:20:21.106047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.148633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.148899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:51:19.230 [2024-04-18 09:20:21.148998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 42.494 ms 00:51:19.230 [2024-04-18 09:20:21.149038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.149596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.149725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:51:19.230 [2024-04-18 09:20:21.149826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:51:19.230 [2024-04-18 09:20:21.149872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.253548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.253796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:51:19.230 [2024-04-18 09:20:21.253894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 103.561 ms 00:51:19.230 [2024-04-18 09:20:21.253935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.298201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.298454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 
00:51:19.230 [2024-04-18 09:20:21.298572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.175 ms 00:51:19.230 [2024-04-18 09:20:21.298613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.230 [2024-04-18 09:20:21.301120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.230 [2024-04-18 09:20:21.301245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:51:19.230 [2024-04-18 09:20:21.301348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.387 ms 00:51:19.230 [2024-04-18 09:20:21.301424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.489 [2024-04-18 09:20:21.343783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.489 [2024-04-18 09:20:21.344045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:51:19.489 [2024-04-18 09:20:21.344208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 42.240 ms 00:51:19.489 [2024-04-18 09:20:21.344251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.489 [2024-04-18 09:20:21.344347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.489 [2024-04-18 09:20:21.344412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:51:19.489 [2024-04-18 09:20:21.344454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:51:19.489 [2024-04-18 09:20:21.344549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.489 [2024-04-18 09:20:21.344721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:19.489 [2024-04-18 09:20:21.344763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:51:19.489 [2024-04-18 09:20:21.344802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:51:19.490 [2024-04-18 09:20:21.344892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:19.490 [2024-04-18 09:20:21.346165] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3076.960 ms, result 0 00:51:19.490 { 00:51:19.490 "name": "ftl", 00:51:19.490 "uuid": "9a5d0302-b8f4-49f8-be6e-63a4edcb4df5" 00:51:19.490 } 00:51:19.490 09:20:21 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:51:19.749 [2024-04-18 09:20:21.616910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:19.749 09:20:21 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:51:19.749 09:20:21 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:51:20.008 [2024-04-18 09:20:22.093450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:51:20.266 09:20:22 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:51:20.266 [2024-04-18 09:20:22.300904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:51:20.266 09:20:22 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:51:20.834 Fill FTL, iteration 1 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:51:20.834 09:20:22 -- 
ftl/upgrade_shutdown.sh@29 -- # seek=0 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:51:20.834 09:20:22 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:51:20.834 09:20:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:20.834 09:20:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:20.834 09:20:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:20.834 09:20:22 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:51:20.834 09:20:22 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:51:20.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:51:20.834 09:20:22 -- ftl/common.sh@163 -- # spdk_ini_pid=83635 00:51:20.834 09:20:22 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:51:20.834 09:20:22 -- ftl/common.sh@165 -- # waitforlisten 83635 /var/tmp/spdk.tgt.sock 00:51:20.834 09:20:22 -- common/autotest_common.sh@817 -- # '[' -z 83635 ']' 00:51:20.834 09:20:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:51:20.834 09:20:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:51:20.834 09:20:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:51:20.834 09:20:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:51:20.834 09:20:22 -- common/autotest_common.sh@10 -- # set +x 00:51:20.834 [2024-04-18 09:20:22.826980] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
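tcp_initiator_setup, expanding above, launches a second spdk_tgt pinned to core 1 with its own RPC socket, so it cannot collide with the main target already holding core 0 and /var/tmp/spdk.sock. The shape of that launch, sketched from the trace:

# Initiator-side target: separate core, separate RPC socket.
"$rootdir/build/bin/spdk_tgt" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
# Later RPCs must name the socket explicitly to reach this instance:
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock \
    bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes the namespace as ftln1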
00:51:20.834 [2024-04-18 09:20:22.827144] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83635 ] 00:51:21.092 [2024-04-18 09:20:23.009507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:21.349 [2024-04-18 09:20:23.294684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:22.284 09:20:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:51:22.284 09:20:24 -- common/autotest_common.sh@850 -- # return 0 00:51:22.284 09:20:24 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:51:22.542 ftln1 00:51:22.542 09:20:24 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:51:22.542 09:20:24 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:51:22.802 09:20:24 -- ftl/common.sh@173 -- # echo ']}' 00:51:22.802 09:20:24 -- ftl/common.sh@176 -- # killprocess 83635 00:51:22.802 09:20:24 -- common/autotest_common.sh@936 -- # '[' -z 83635 ']' 00:51:22.802 09:20:24 -- common/autotest_common.sh@940 -- # kill -0 83635 00:51:22.802 09:20:24 -- common/autotest_common.sh@941 -- # uname 00:51:22.802 09:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:51:22.802 09:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83635 00:51:23.060 killing process with pid 83635 00:51:23.060 09:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:51:23.060 09:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:51:23.060 09:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83635' 00:51:23.060 09:20:24 -- common/autotest_common.sh@955 -- # kill 83635 00:51:23.060 09:20:24 -- common/autotest_common.sh@960 -- # wait 83635 00:51:25.589 09:20:27 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:51:25.589 09:20:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:51:25.846 [2024-04-18 09:20:27.710414] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
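The spdk_dd invocation above is what tcp_dd expands to for a fill pass: it loads the saved initiator config (ini.json) to reattach ftln1 over TCP, then streams 1024 one-MiB blocks of urandom into the bdev at queue depth 2. Sketched as a single command:

# Fill pass: /dev/urandom -> ftln1, 1024 MiB total, qd=2, starting at block 0.
"$rootdir/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$testdir/config/ini.json" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0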
00:51:25.846 [2024-04-18 09:20:27.710565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83689 ] 00:51:25.846 [2024-04-18 09:20:27.892330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:26.104 [2024-04-18 09:20:28.145941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:32.475  Copying: 227/1024 [MB] (227 MBps) Copying: 453/1024 [MB] (226 MBps) Copying: 686/1024 [MB] (233 MBps) Copying: 926/1024 [MB] (240 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:51:32.475 00:51:32.475 Calculate MD5 checksum, iteration 1 00:51:32.475 09:20:34 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:51:32.475 09:20:34 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:51:32.475 09:20:34 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:32.475 09:20:34 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:32.475 09:20:34 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:32.475 09:20:34 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:32.475 09:20:34 -- ftl/common.sh@154 -- # return 0 00:51:32.475 09:20:34 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:32.475 [2024-04-18 09:20:34.455772] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
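The checksum pass reverses direction: --ib reads the same range back out of ftln1 into an ordinary file that md5sum can hash. A sketch mirroring the traced command:

# Readback pass: ftln1 -> plain file, same bs/count, skip in place of seek.
"$rootdir/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$testdir/config/ini.json" \
    --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=0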
00:51:32.475 [2024-04-18 09:20:34.456139] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83764 ] 00:51:32.734 [2024-04-18 09:20:34.618398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:32.993 [2024-04-18 09:20:34.872784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:36.683  Copying: 576/1024 [MB] (576 MBps) Copying: 1024/1024 [MB] (average 597 MBps) 00:51:36.683 00:51:36.683 09:20:38 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:51:36.683 09:20:38 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:51:38.582 Fill FTL, iteration 2 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9074f5072f9bd55fc88c0866d7016c43 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:51:38.582 09:20:40 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:51:38.582 09:20:40 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:38.582 09:20:40 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:38.582 09:20:40 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:38.582 09:20:40 -- ftl/common.sh@154 -- # return 0 00:51:38.582 09:20:40 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:51:38.582 [2024-04-18 09:20:40.636235] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
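One full iteration has now run: fill at seek=0, read back at skip=0, hash stored in sums[0], and both offsets advanced by 1024 blocks for iteration 2. The loop shape, reconstructed from the traced variable updates (tcp_dd is the suite's helper wrapping spdk_dd as shown above):

# Two passes over successive 1 GiB regions of the FTL bdev.
bs=1048576; count=1024; iterations=2
seek=0; skip=0; sums=()
for ((i = 0; i < iterations; i++)); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=2 --seek=$seek
    ((seek += count))   # next fill targets the next GiB
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=2 --skip=$skip
    ((skip += count))   # next readback follows it
    sums[i]=$(md5sum "$testdir/file" | cut -f1 '-d ')   # one hash per region
done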
00:51:38.582 [2024-04-18 09:20:40.636364] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83830 ] 00:51:38.839 [2024-04-18 09:20:40.802347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:39.166 [2024-04-18 09:20:41.059341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:45.717  Copying: 219/1024 [MB] (219 MBps) Copying: 453/1024 [MB] (234 MBps) Copying: 679/1024 [MB] (226 MBps) Copying: 898/1024 [MB] (219 MBps) Copying: 1024/1024 [MB] (average 224 MBps) 00:51:45.717 00:51:45.717 Calculate MD5 checksum, iteration 2 00:51:45.717 09:20:47 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:51:45.717 09:20:47 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:51:45.717 09:20:47 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:45.717 09:20:47 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:45.717 09:20:47 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:45.717 09:20:47 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:45.717 09:20:47 -- ftl/common.sh@154 -- # return 0 00:51:45.717 09:20:47 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:45.717 [2024-04-18 09:20:47.686330] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
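[Editor's note] For scale: each pass above moves bs x count = 1 MiB x 1024 = 1024 MiB, so the random-data fill at its trailing average of ~224-231 MBps takes roughly 1024 / 224 ≈ 4.6 s, while the readback for checksumming averages ~584-597 MBps, i.e. under 2 s; --qd=2 keeps two 1 MiB I/Os in flight throughout.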
00:51:45.717 [2024-04-18 09:20:47.686747] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83901 ] 00:51:45.975 [2024-04-18 09:20:47.877596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:46.234 [2024-04-18 09:20:48.221121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:50.602  Copying: 561/1024 [MB] (561 MBps) Copying: 1024/1024 [MB] (average 584 MBps) 00:51:50.602 00:51:50.602 09:20:52 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:51:50.602 09:20:52 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=01bffbc0a0a6de22eb1cbae92e2d5ab5 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:51:53.134 [2024-04-18 09:20:54.868332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.134 [2024-04-18 09:20:54.868404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:53.134 [2024-04-18 09:20:54.868421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:51:53.134 [2024-04-18 09:20:54.868432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.134 [2024-04-18 09:20:54.868461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.134 [2024-04-18 09:20:54.868473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:53.134 [2024-04-18 09:20:54.868488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:53.134 [2024-04-18 09:20:54.868498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.134 [2024-04-18 09:20:54.868531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.134 [2024-04-18 09:20:54.868542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:53.134 [2024-04-18 09:20:54.868552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:53.134 [2024-04-18 09:20:54.868562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.134 [2024-04-18 09:20:54.868626] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.295 ms, result 0 00:51:53.134 true 00:51:53.134 09:20:54 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:53.134 { 00:51:53.134 "name": "ftl", 00:51:53.134 "properties": [ 00:51:53.134 { 00:51:53.134 "name": "superblock_version", 00:51:53.134 "value": 5, 00:51:53.134 "read-only": true 00:51:53.134 }, 00:51:53.134 { 00:51:53.134 "name": "base_device", 00:51:53.134 "bands": [ 00:51:53.134 { 00:51:53.134 "id": 0, 00:51:53.134 "state": "FREE", 00:51:53.134 "validity": 0.0 00:51:53.134 }, 00:51:53.134 { 00:51:53.134 "id": 1, 00:51:53.134 "state": "FREE", 00:51:53.134 "validity": 0.0 00:51:53.134 }, 00:51:53.134 { 00:51:53.134 "id": 2, 00:51:53.134 "state": "FREE", 00:51:53.134 "validity": 0.0 00:51:53.134 }, 00:51:53.134 { 00:51:53.134 "id": 3, 
00:51:53.134 "state": "FREE", 00:51:53.134 "validity": 0.0 00:51:53.134 }, 00:51:53.134 { 00:51:53.134 "id": 4, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 5, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 6, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 7, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 8, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 9, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 10, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 11, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 12, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 13, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 14, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 15, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 16, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 17, 00:51:53.135 "state": "FREE", 00:51:53.135 "validity": 0.0 00:51:53.135 } 00:51:53.135 ], 00:51:53.135 "read-only": true 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "name": "cache_device", 00:51:53.135 "type": "bdev", 00:51:53.135 "chunks": [ 00:51:53.135 { 00:51:53.135 "id": 0, 00:51:53.135 "state": "CLOSED", 00:51:53.135 "utilization": 1.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 1, 00:51:53.135 "state": "CLOSED", 00:51:53.135 "utilization": 1.0 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 2, 00:51:53.135 "state": "OPEN", 00:51:53.135 "utilization": 0.001953125 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "id": 3, 00:51:53.135 "state": "OPEN", 00:51:53.135 "utilization": 0.0 00:51:53.135 } 00:51:53.135 ], 00:51:53.135 "read-only": true 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "name": "verbose_mode", 00:51:53.135 "value": true, 00:51:53.135 "unit": "", 00:51:53.135 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:51:53.135 }, 00:51:53.135 { 00:51:53.135 "name": "prep_upgrade_on_shutdown", 00:51:53.135 "value": false, 00:51:53.135 "unit": "", 00:51:53.135 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:51:53.135 } 00:51:53.135 ] 00:51:53.135 } 00:51:53.135 09:20:55 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:51:53.393 [2024-04-18 09:20:55.240875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.393 [2024-04-18 09:20:55.240971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:53.393 [2024-04-18 09:20:55.240996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:51:53.393 [2024-04-18 09:20:55.241014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.393 [2024-04-18 09:20:55.241054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:51:53.393 [2024-04-18 09:20:55.241073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:53.393 [2024-04-18 09:20:55.241090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:53.393 [2024-04-18 09:20:55.241107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.393 [2024-04-18 09:20:55.241140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.393 [2024-04-18 09:20:55.241157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:53.393 [2024-04-18 09:20:55.241175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:53.393 [2024-04-18 09:20:55.241191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.393 [2024-04-18 09:20:55.241278] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.388 ms, result 0 00:51:53.393 true 00:51:53.393 09:20:55 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:51:53.393 09:20:55 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:53.393 09:20:55 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:51:53.651 09:20:55 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:51:53.651 09:20:55 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:51:53.651 09:20:55 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:51:53.911 [2024-04-18 09:20:55.769371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.911 [2024-04-18 09:20:55.769449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:53.911 [2024-04-18 09:20:55.769465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:51:53.911 [2024-04-18 09:20:55.769475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.911 [2024-04-18 09:20:55.769522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.911 [2024-04-18 09:20:55.769534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:53.911 [2024-04-18 09:20:55.769545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:53.911 [2024-04-18 09:20:55.769555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.911 [2024-04-18 09:20:55.769578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.911 [2024-04-18 09:20:55.769589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:53.911 [2024-04-18 09:20:55.769600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:53.911 [2024-04-18 09:20:55.769610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.911 [2024-04-18 09:20:55.769673] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.292 ms, result 0 00:51:53.911 true 00:51:53.911 09:20:55 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:53.911 { 00:51:53.911 "name": "ftl", 00:51:53.911 "properties": [ 00:51:53.911 { 00:51:53.911 "name": "superblock_version", 00:51:53.911 "value": 5, 00:51:53.911 "read-only": true 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 
"name": "base_device", 00:51:53.911 "bands": [ 00:51:53.911 { 00:51:53.911 "id": 0, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 1, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 2, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 3, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 4, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 5, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 6, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 7, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 8, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 9, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 10, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 11, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 12, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 13, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 14, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 15, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 16, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 17, 00:51:53.911 "state": "FREE", 00:51:53.911 "validity": 0.0 00:51:53.911 } 00:51:53.911 ], 00:51:53.911 "read-only": true 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "name": "cache_device", 00:51:53.911 "type": "bdev", 00:51:53.911 "chunks": [ 00:51:53.911 { 00:51:53.911 "id": 0, 00:51:53.911 "state": "CLOSED", 00:51:53.911 "utilization": 1.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 1, 00:51:53.911 "state": "CLOSED", 00:51:53.911 "utilization": 1.0 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 2, 00:51:53.911 "state": "OPEN", 00:51:53.911 "utilization": 0.001953125 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "id": 3, 00:51:53.911 "state": "OPEN", 00:51:53.911 "utilization": 0.0 00:51:53.911 } 00:51:53.911 ], 00:51:53.911 "read-only": true 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "name": "verbose_mode", 00:51:53.911 "value": true, 00:51:53.911 "unit": "", 00:51:53.911 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:51:53.911 }, 00:51:53.911 { 00:51:53.911 "name": "prep_upgrade_on_shutdown", 00:51:53.911 "value": true, 00:51:53.911 "unit": "", 00:51:53.911 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:51:53.911 } 00:51:53.911 ] 00:51:53.911 } 00:51:53.911 09:20:55 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:51:53.911 09:20:55 -- ftl/common.sh@130 -- # [[ -n 83507 ]] 00:51:53.911 09:20:55 -- ftl/common.sh@131 -- # killprocess 83507 00:51:53.911 09:20:55 -- common/autotest_common.sh@936 -- # '[' -z 83507 ']' 
00:51:53.911 09:20:55 -- common/autotest_common.sh@940 -- # kill -0 83507 00:51:53.911 09:20:55 -- common/autotest_common.sh@941 -- # uname 00:51:53.911 09:20:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:51:53.911 09:20:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83507 00:51:54.171 killing process with pid 83507 00:51:54.171 09:20:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:51:54.171 09:20:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:51:54.171 09:20:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83507' 00:51:54.171 09:20:56 -- common/autotest_common.sh@955 -- # kill 83507 00:51:54.171 09:20:56 -- common/autotest_common.sh@960 -- # wait 83507 00:51:55.548 [2024-04-18 09:20:57.215363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:51:55.548 [2024-04-18 09:20:57.245831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.548 [2024-04-18 09:20:57.246007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:51:55.548 [2024-04-18 09:20:57.246114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:55.548 [2024-04-18 09:20:57.246153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:55.548 [2024-04-18 09:20:57.246207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:51:55.548 [2024-04-18 09:20:57.250423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:55.548 [2024-04-18 09:20:57.250574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:51:55.548 [2024-04-18 09:20:57.250707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.156 ms 00:51:55.548 [2024-04-18 09:20:57.250760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.005102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.005336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:52:03.671 [2024-04-18 09:21:05.005456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7754.246 ms 00:52:03.671 [2024-04-18 09:21:05.005498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.006560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.006708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:52:03.671 [2024-04-18 09:21:05.006804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.012 ms 00:52:03.671 [2024-04-18 09:21:05.006844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.007914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.008067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:52:03.671 [2024-04-18 09:21:05.008166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.007 ms 00:52:03.671 [2024-04-18 09:21:05.008209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.025578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.025714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:52:03.671 [2024-04-18 
09:21:05.025848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.288 ms 00:52:03.671 [2024-04-18 09:21:05.025885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.035831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.035970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:52:03.671 [2024-04-18 09:21:05.036056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.887 ms 00:52:03.671 [2024-04-18 09:21:05.036093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.036212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.036320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:52:03.671 [2024-04-18 09:21:05.036410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:52:03.671 [2024-04-18 09:21:05.036445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.052370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.052524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:52:03.671 [2024-04-18 09:21:05.052665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.879 ms 00:52:03.671 [2024-04-18 09:21:05.052705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.069145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.069300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:52:03.671 [2024-04-18 09:21:05.069441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.380 ms 00:52:03.671 [2024-04-18 09:21:05.069481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.086094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.086249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:52:03.671 [2024-04-18 09:21:05.086386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.536 ms 00:52:03.671 [2024-04-18 09:21:05.086427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.102764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.102901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:52:03.671 [2024-04-18 09:21:05.103035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.216 ms 00:52:03.671 [2024-04-18 09:21:05.103073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.103135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:52:03.671 [2024-04-18 09:21:05.103182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:03.671 [2024-04-18 09:21:05.103237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:52:03.671 [2024-04-18 09:21:05.103357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:52:03.671 [2024-04-18 09:21:05.103434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] 
Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.103968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:03.671 [2024-04-18 09:21:05.104487] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:52:03.671 [2024-04-18 09:21:05.104519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9a5d0302-b8f4-49f8-be6e-63a4edcb4df5 00:52:03.671 [2024-04-18 09:21:05.104617] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:52:03.671 [2024-04-18 09:21:05.104657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:52:03.671 [2024-04-18 09:21:05.104688] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:52:03.671 [2024-04-18 09:21:05.104721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:52:03.671 [2024-04-18 09:21:05.104752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:52:03.671 [2024-04-18 09:21:05.104785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:52:03.671 [2024-04-18 09:21:05.104862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:52:03.671 [2024-04-18 09:21:05.104899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:52:03.671 [2024-04-18 09:21:05.104930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:52:03.671 [2024-04-18 09:21:05.104962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.104995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:52:03.671 [2024-04-18 09:21:05.105034] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.828 ms 00:52:03.671 [2024-04-18 09:21:05.105154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.671 [2024-04-18 09:21:05.126259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.671 [2024-04-18 09:21:05.126506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:52:03.672 [2024-04-18 09:21:05.126597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.055 ms 00:52:03.672 [2024-04-18 09:21:05.126636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.127007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:03.672 [2024-04-18 09:21:05.127112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:52:03.672 [2024-04-18 09:21:05.127196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:52:03.672 [2024-04-18 09:21:05.127235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.199570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.199805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:03.672 [2024-04-18 09:21:05.199886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.199925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.200019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.200057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:03.672 [2024-04-18 09:21:05.200090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.200121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.200321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.200386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:03.672 [2024-04-18 09:21:05.200425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.200549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.200606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.200697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:03.672 [2024-04-18 09:21:05.200738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.200770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.332724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.333003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:03.672 [2024-04-18 09:21:05.333178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.333219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.384189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.384478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 
00:52:03.672 [2024-04-18 09:21:05.384582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.384626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.384755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.384876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:03.672 [2024-04-18 09:21:05.384924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.384960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.385043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.385091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:03.672 [2024-04-18 09:21:05.385131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.385249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.385433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.385487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:03.672 [2024-04-18 09:21:05.385528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.385630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.385710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.385753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:52:03.672 [2024-04-18 09:21:05.385789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.385829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.385891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.385966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:03.672 [2024-04-18 09:21:05.386008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.386041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.386111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:03.672 [2024-04-18 09:21:05.386153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:03.672 [2024-04-18 09:21:05.386189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:03.672 [2024-04-18 09:21:05.386305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:03.672 [2024-04-18 09:21:05.386494] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8140.590 ms, result 0 00:52:08.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
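[Editor's note] With 'FTL shutdown' reported complete after 8140.590 ms, the target is bounced: ftl/common.sh@130-132 kills pid 83507 and unsets spdk_tgt_pid, and tcp_target_setup (@75, @81-91, traced below) starts a fresh spdk_tgt on core 0 from the saved tgt.json and waits for its RPC socket. A condensed sketch of that restart, with helper bodies paraphrased from the xtrace (the base_bdev/cache_bdev locals at @81-82 are elided):

tcp_target_shutdown() {
	if [[ -n $spdk_tgt_pid ]]; then          # @130
		killprocess $spdk_tgt_pid        # kill -0 check, kill, wait
		unset spdk_tgt_pid               # @132
	fi
}

tcp_target_setup() {
	"$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
		--config="$rootdir/test/ftl/config/tgt.json" &    # @85
	spdk_tgt_pid=$!                          # 84115 in this run (@89)
	waitforlisten $spdk_tgt_pid              # polls /var/tmp/spdk.sock (@91)
}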
00:52:08.936 09:21:10 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:52:08.936 09:21:10 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:52:08.936 09:21:10 -- ftl/common.sh@81 -- # local base_bdev= 00:52:08.936 09:21:10 -- ftl/common.sh@82 -- # local cache_bdev= 00:52:08.936 09:21:10 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:08.936 09:21:10 -- ftl/common.sh@89 -- # spdk_tgt_pid=84115 00:52:08.936 09:21:10 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:08.936 09:21:10 -- ftl/common.sh@91 -- # waitforlisten 84115 00:52:08.936 09:21:10 -- common/autotest_common.sh@817 -- # '[' -z 84115 ']' 00:52:08.936 09:21:10 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:08.936 09:21:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:08.936 09:21:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:52:08.936 09:21:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:08.936 09:21:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:52:08.936 09:21:10 -- common/autotest_common.sh@10 -- # set +x 00:52:08.936 [2024-04-18 09:21:10.973106] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:52:08.937 [2024-04-18 09:21:10.973416] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84115 ] 00:52:09.194 [2024-04-18 09:21:11.145575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:09.452 [2024-04-18 09:21:11.499158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:52:10.830 [2024-04-18 09:21:12.588287] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:10.830 [2024-04-18 09:21:12.588662] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:10.830 [2024-04-18 09:21:12.734025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.734285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:10.830 [2024-04-18 09:21:12.734410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:52:10.830 [2024-04-18 09:21:12.734457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.734678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.734798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:10.830 [2024-04-18 09:21:12.734911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:52:10.830 [2024-04-18 09:21:12.734954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.735088] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:10.830 [2024-04-18 09:21:12.736616] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:10.830 [2024-04-18 09:21:12.736797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.736900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Open cache bdev 00:52:10.830 [2024-04-18 09:21:12.736953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.732 ms 00:52:10.830 [2024-04-18 09:21:12.737034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.738761] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:52:10.830 [2024-04-18 09:21:12.763345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.763630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:52:10.830 [2024-04-18 09:21:12.763727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.600 ms 00:52:10.830 [2024-04-18 09:21:12.763770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.763902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.763948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:52:10.830 [2024-04-18 09:21:12.763985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:52:10.830 [2024-04-18 09:21:12.764105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.772105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.772327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:10.830 [2024-04-18 09:21:12.772434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.833 ms 00:52:10.830 [2024-04-18 09:21:12.772479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.772570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.772630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:10.830 [2024-04-18 09:21:12.772691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:52:10.830 [2024-04-18 09:21:12.772728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.772823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.772888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:10.830 [2024-04-18 09:21:12.772936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:52:10.830 [2024-04-18 09:21:12.772971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.773030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:10.830 [2024-04-18 09:21:12.779890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.780081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:10.830 [2024-04-18 09:21:12.780173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.867 ms 00:52:10.830 [2024-04-18 09:21:12.780215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.780337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.780387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:10.830 [2024-04-18 09:21:12.780437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 
00:52:10.830 [2024-04-18 09:21:12.780472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.780614] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:52:10.830 [2024-04-18 09:21:12.780669] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:52:10.830 [2024-04-18 09:21:12.780760] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:52:10.830 [2024-04-18 09:21:12.780859] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:52:10.830 [2024-04-18 09:21:12.780981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:52:10.830 [2024-04-18 09:21:12.781138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:10.830 [2024-04-18 09:21:12.781200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:52:10.830 [2024-04-18 09:21:12.781283] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:52:10.830 [2024-04-18 09:21:12.781404] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:52:10.830 [2024-04-18 09:21:12.781463] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:10.830 [2024-04-18 09:21:12.781495] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:10.830 [2024-04-18 09:21:12.781643] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:52:10.830 [2024-04-18 09:21:12.781680] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:52:10.830 [2024-04-18 09:21:12.781714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.781747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:10.830 [2024-04-18 09:21:12.781818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:52:10.830 [2024-04-18 09:21:12.781857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.781958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.830 [2024-04-18 09:21:12.781994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:10.830 [2024-04-18 09:21:12.782107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:52:10.830 [2024-04-18 09:21:12.782146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.830 [2024-04-18 09:21:12.782258] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:10.830 [2024-04-18 09:21:12.782296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:10.830 [2024-04-18 09:21:12.782329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:10.831 [2024-04-18 09:21:12.782441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.782532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:10.831 [2024-04-18 09:21:12.782572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.782640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] 
blocks: 14.50 MiB 00:52:10.831 [2024-04-18 09:21:12.782711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:10.831 [2024-04-18 09:21:12.782749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:10.831 [2024-04-18 09:21:12.782816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.782894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:10.831 [2024-04-18 09:21:12.782933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:52:10.831 [2024-04-18 09:21:12.782999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.783038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:10.831 [2024-04-18 09:21:12.783072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:52:10.831 [2024-04-18 09:21:12.783136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.783173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:10.831 [2024-04-18 09:21:12.783207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:52:10.831 [2024-04-18 09:21:12.783252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.783286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:52:10.831 [2024-04-18 09:21:12.783383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:52:10.831 [2024-04-18 09:21:12.783427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:52:10.831 [2024-04-18 09:21:12.783462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:10.831 [2024-04-18 09:21:12.783496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:10.831 [2024-04-18 09:21:12.783555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:10.831 [2024-04-18 09:21:12.783619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:10.831 [2024-04-18 09:21:12.783689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:52:10.831 [2024-04-18 09:21:12.783729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:10.831 [2024-04-18 09:21:12.783763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:10.831 [2024-04-18 09:21:12.783797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:10.831 [2024-04-18 09:21:12.783881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:10.831 [2024-04-18 09:21:12.783921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:10.831 [2024-04-18 09:21:12.783964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:52:10.831 [2024-04-18 09:21:12.784007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:10.831 [2024-04-18 09:21:12.784044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:10.831 [2024-04-18 09:21:12.784104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:10.831 [2024-04-18 09:21:12.784234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.784273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:10.831 [2024-04-18 09:21:12.784307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] 
offset: 31.00 MiB 00:52:10.831 [2024-04-18 09:21:12.784342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.784376] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:52:10.831 [2024-04-18 09:21:12.784427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:10.831 [2024-04-18 09:21:12.784519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:10.831 [2024-04-18 09:21:12.784564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:10.831 [2024-04-18 09:21:12.784602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:10.831 [2024-04-18 09:21:12.784637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:10.831 [2024-04-18 09:21:12.784672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:10.831 [2024-04-18 09:21:12.784767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:10.831 [2024-04-18 09:21:12.784808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:10.831 [2024-04-18 09:21:12.784861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:10.831 [2024-04-18 09:21:12.784899] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:10.831 [2024-04-18 09:21:12.784986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:10.831 [2024-04-18 09:21:12.785145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785314] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:52:10.831 [2024-04-18 09:21:12.785535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:52:10.831 [2024-04-18 09:21:12.785592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:52:10.831 [2024-04-18 09:21:12.785649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:52:10.831 [2024-04-18 09:21:12.785705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.785943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.786070] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:52:10.831 [2024-04-18 09:21:12.786127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:52:10.831 [2024-04-18 09:21:12.786223] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:10.831 [2024-04-18 09:21:12.786284] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.786340] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:10.831 [2024-04-18 09:21:12.786462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:10.831 [2024-04-18 09:21:12.786573] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:10.831 [2024-04-18 09:21:12.786688] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:52:10.831 [2024-04-18 09:21:12.786788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.786827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:10.831 [2024-04-18 09:21:12.786896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.569 ms 00:52:10.831 [2024-04-18 09:21:12.786933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.814260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.814514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:10.831 [2024-04-18 09:21:12.814643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.205 ms 00:52:10.831 [2024-04-18 09:21:12.814686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.814778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.814816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:52:10.831 [2024-04-18 09:21:12.814892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:52:10.831 [2024-04-18 09:21:12.814933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.874467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.874683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:10.831 [2024-04-18 09:21:12.874835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.348 ms 00:52:10.831 [2024-04-18 09:21:12.874877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.874976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.875083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:10.831 [2024-04-18 09:21:12.875120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:10.831 [2024-04-18 09:21:12.875155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:52:10.831 [2024-04-18 09:21:12.875703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.875826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:10.831 [2024-04-18 09:21:12.875908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:52:10.831 [2024-04-18 09:21:12.876014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.876105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.876232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:10.831 [2024-04-18 09:21:12.876276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:52:10.831 [2024-04-18 09:21:12.876357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.903580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.831 [2024-04-18 09:21:12.903793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:10.831 [2024-04-18 09:21:12.903909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.110 ms 00:52:10.831 [2024-04-18 09:21:12.903973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.831 [2024-04-18 09:21:12.927145] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:52:10.832 [2024-04-18 09:21:12.927458] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:52:10.832 [2024-04-18 09:21:12.927585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.832 [2024-04-18 09:21:12.927624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:52:10.832 [2024-04-18 09:21:12.927663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.393 ms 00:52:10.832 [2024-04-18 09:21:12.927745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.090 [2024-04-18 09:21:12.953846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.090 [2024-04-18 09:21:12.954069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:52:11.090 [2024-04-18 09:21:12.954180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.000 ms 00:52:11.090 [2024-04-18 09:21:12.954224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.090 [2024-04-18 09:21:12.977896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.090 [2024-04-18 09:21:12.978098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:52:11.090 [2024-04-18 09:21:12.978204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.572 ms 00:52:11.090 [2024-04-18 09:21:12.978279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.090 [2024-04-18 09:21:12.998848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.090 [2024-04-18 09:21:12.999045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:52:11.090 [2024-04-18 09:21:12.999132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.485 ms 00:52:11.090 [2024-04-18 09:21:12.999170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.090 [2024-04-18 09:21:12.999791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:52:11.090 [2024-04-18 09:21:12.999926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:52:11.090 [2024-04-18 09:21:13.000022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:52:11.091 [2024-04-18 09:21:13.000116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.101009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.101280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:52:11.091 [2024-04-18 09:21:13.101494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 100.830 ms 00:52:11.091 [2024-04-18 09:21:13.101538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.118060] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:52:11.091 [2024-04-18 09:21:13.119353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.119492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:52:11.091 [2024-04-18 09:21:13.119576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.727 ms 00:52:11.091 [2024-04-18 09:21:13.119614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.119772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.119846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:52:11.091 [2024-04-18 09:21:13.119910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:52:11.091 [2024-04-18 09:21:13.119945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.120068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.120114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:52:11.091 [2024-04-18 09:21:13.120150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:52:11.091 [2024-04-18 09:21:13.120186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.122548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.122679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:52:11.091 [2024-04-18 09:21:13.122755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.310 ms 00:52:11.091 [2024-04-18 09:21:13.122845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.122923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.123000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:52:11.091 [2024-04-18 09:21:13.123034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:52:11.091 [2024-04-18 09:21:13.123066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.123128] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:52:11.091 [2024-04-18 09:21:13.123218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.123256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:52:11.091 
[2024-04-18 09:21:13.123289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:52:11.091 [2024-04-18 09:21:13.123321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.169836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.170069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:52:11.091 [2024-04-18 09:21:13.170158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.408 ms 00:52:11.091 [2024-04-18 09:21:13.170198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.170359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.091 [2024-04-18 09:21:13.170488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:52:11.091 [2024-04-18 09:21:13.170563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:52:11.091 [2024-04-18 09:21:13.170608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.091 [2024-04-18 09:21:13.171955] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 437.462 ms, result 0 00:52:11.091 [2024-04-18 09:21:13.186587] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:11.392 [2024-04-18 09:21:13.202563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:52:11.392 [2024-04-18 09:21:13.213820] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:11.651 09:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:52:11.651 09:21:13 -- common/autotest_common.sh@850 -- # return 0 00:52:11.651 09:21:13 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:11.651 09:21:13 -- ftl/common.sh@95 -- # return 0 00:52:11.651 09:21:13 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:11.910 [2024-04-18 09:21:13.878332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.910 [2024-04-18 09:21:13.878572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:11.910 [2024-04-18 09:21:13.878736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:52:11.910 [2024-04-18 09:21:13.878776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.910 [2024-04-18 09:21:13.878839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.910 [2024-04-18 09:21:13.878926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:11.910 [2024-04-18 09:21:13.878970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:11.910 [2024-04-18 09:21:13.879000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.910 [2024-04-18 09:21:13.879090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:11.910 [2024-04-18 09:21:13.879131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:11.910 [2024-04-18 09:21:13.879209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:11.910 [2024-04-18 09:21:13.879244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:11.910 [2024-04-18 09:21:13.879335] mngt/ftl_mngt.c: 434:finish_msg: 
*NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.991 ms, result 0 00:52:11.910 true 00:52:11.910 09:21:13 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:12.168 { 00:52:12.168 "name": "ftl", 00:52:12.168 "properties": [ 00:52:12.168 { 00:52:12.168 "name": "superblock_version", 00:52:12.168 "value": 5, 00:52:12.168 "read-only": true 00:52:12.168 }, 00:52:12.168 { 00:52:12.168 "name": "base_device", 00:52:12.168 "bands": [ 00:52:12.168 { 00:52:12.168 "id": 0, 00:52:12.168 "state": "CLOSED", 00:52:12.169 "validity": 1.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 1, 00:52:12.169 "state": "CLOSED", 00:52:12.169 "validity": 1.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 2, 00:52:12.169 "state": "CLOSED", 00:52:12.169 "validity": 0.007843137254901933 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 3, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 4, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 5, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 6, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 7, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 8, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 9, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 10, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 11, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 12, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 13, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 14, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 15, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 16, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 17, 00:52:12.169 "state": "FREE", 00:52:12.169 "validity": 0.0 00:52:12.169 } 00:52:12.169 ], 00:52:12.169 "read-only": true 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "name": "cache_device", 00:52:12.169 "type": "bdev", 00:52:12.169 "chunks": [ 00:52:12.169 { 00:52:12.169 "id": 0, 00:52:12.169 "state": "OPEN", 00:52:12.169 "utilization": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 1, 00:52:12.169 "state": "OPEN", 00:52:12.169 "utilization": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 2, 00:52:12.169 "state": "FREE", 00:52:12.169 "utilization": 0.0 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "id": 3, 00:52:12.169 "state": "FREE", 00:52:12.169 "utilization": 0.0 00:52:12.169 } 00:52:12.169 ], 00:52:12.169 "read-only": true 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "name": "verbose_mode", 00:52:12.169 "value": true, 00:52:12.169 "unit": "", 00:52:12.169 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:12.169 }, 00:52:12.169 { 00:52:12.169 "name": 
"prep_upgrade_on_shutdown", 00:52:12.169 "value": false, 00:52:12.169 "unit": "", 00:52:12.169 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:12.169 } 00:52:12.169 ] 00:52:12.169 } 00:52:12.169 09:21:14 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:52:12.169 09:21:14 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:52:12.169 09:21:14 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:12.428 09:21:14 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:52:12.428 09:21:14 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:52:12.428 09:21:14 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:52:12.428 09:21:14 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:52:12.428 09:21:14 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:12.687 Validate MD5 checksum, iteration 1 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:52:12.687 09:21:14 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:12.687 09:21:14 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:12.687 09:21:14 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:12.687 09:21:14 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:12.687 09:21:14 -- ftl/common.sh@154 -- # return 0 00:52:12.687 09:21:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:12.687 [2024-04-18 09:21:14.684661] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
00:52:12.687 [2024-04-18 09:21:14.684969] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84171 ] 00:52:12.946 [2024-04-18 09:21:14.857709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:13.206 [2024-04-18 09:21:15.185449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:52:17.375  Copying: 649/1024 [MB] (649 MBps) Copying: 1024/1024 [MB] (average 643 MBps) 00:52:17.375 00:52:17.375 09:21:19 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:52:17.375 09:21:19 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:19.283 Validate MD5 checksum, iteration 2 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@103 -- # sum=9074f5072f9bd55fc88c0866d7016c43 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9074f5072f9bd55fc88c0866d7016c43 != \9\0\7\4\f\5\0\7\2\f\9\b\d\5\5\f\c\8\8\c\0\8\6\6\d\7\0\1\6\c\4\3 ]] 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:52:19.283 09:21:21 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:19.283 09:21:21 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:19.283 09:21:21 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:19.283 09:21:21 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:19.283 09:21:21 -- ftl/common.sh@154 -- # return 0 00:52:19.283 09:21:21 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:19.542 [2024-04-18 09:21:21.427892] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
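Each checksum iteration above reads a 1024 MiB window of ftln1 over NVMe/TCP into a scratch file, hashes it, and compares the digest against the value recorded when the data was written; the backslash-escaped string in the [[ ... != \9\0\7\4... ]] trace is just bash xtrace quoting of that literal comparison. A minimal sketch of the loop, assuming an "expected" array holding the per-window digests (a stand-in for the script's bookkeeping):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull the next 1 GiB window from the FTL bdev over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # A digest mismatch means the FTL returned different data than was written.
        [[ $sum == "${expected[i]}" ]] || exit 1
    done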
00:52:19.542 [2024-04-18 09:21:21.428325] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84240 ] 00:52:19.542 [2024-04-18 09:21:21.610901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:19.801 [2024-04-18 09:21:21.879492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:52:25.210  Copying: 590/1024 [MB] (590 MBps) Copying: 1024/1024 [MB] (average 574 MBps) 00:52:25.210 00:52:25.210 09:21:27 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:52:25.210 09:21:27 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@103 -- # sum=01bffbc0a0a6de22eb1cbae92e2d5ab5 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@105 -- # [[ 01bffbc0a0a6de22eb1cbae92e2d5ab5 != \0\1\b\f\f\b\c\0\a\0\a\6\d\e\2\2\e\b\1\c\b\a\e\9\2\e\2\d\5\a\b\5 ]] 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:52:27.111 09:21:29 -- ftl/common.sh@137 -- # [[ -n 84115 ]] 00:52:27.111 09:21:29 -- ftl/common.sh@138 -- # kill -9 84115 00:52:27.111 09:21:29 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:52:27.111 09:21:29 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:52:27.111 09:21:29 -- ftl/common.sh@81 -- # local base_bdev= 00:52:27.111 09:21:29 -- ftl/common.sh@82 -- # local cache_bdev= 00:52:27.111 09:21:29 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:27.111 09:21:29 -- ftl/common.sh@89 -- # spdk_tgt_pid=84324 00:52:27.111 09:21:29 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:27.111 09:21:29 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:27.111 09:21:29 -- ftl/common.sh@91 -- # waitforlisten 84324 00:52:27.111 09:21:29 -- common/autotest_common.sh@817 -- # '[' -z 84324 ']' 00:52:27.111 09:21:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:27.111 09:21:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:52:27.111 09:21:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:27.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:27.111 09:21:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:52:27.111 09:21:29 -- common/autotest_common.sh@10 -- # set +x 00:52:27.111 [2024-04-18 09:21:29.129889] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
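The tcp_target_shutdown_dirty step traced above simulates a crash: SIGKILL gives the target no chance to persist FTL metadata or mark the superblock clean, so the relaunch from tgt.json must take the recovery path that follows. A minimal sketch reconstructed from the ftl/common.sh trace (line numbers as logged; the body is inferred from the trace, not quoted from the source):

    tcp_target_shutdown_dirty() {
        # kill -9 skips every shutdown hook, leaving the FTL device dirty.
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

After the kill, tcp_target_setup restarts spdk_tgt with --config=tgt.json, and waitforlisten blocks until the new process (pid 84324 here) answers on /var/tmp/spdk.sock.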
00:52:27.111 [2024-04-18 09:21:29.130205] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84324 ] 00:52:27.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 816: 84115 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:52:27.368 [2024-04-18 09:21:29.297031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:27.627 [2024-04-18 09:21:29.546260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:52:28.588 [2024-04-18 09:21:30.647898] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:28.588 [2024-04-18 09:21:30.648171] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:28.848 [2024-04-18 09:21:30.793067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.848 [2024-04-18 09:21:30.793311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:28.848 [2024-04-18 09:21:30.793438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:52:28.848 [2024-04-18 09:21:30.793485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.793680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.793736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:28.849 [2024-04-18 09:21:30.793778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:52:28.849 [2024-04-18 09:21:30.793907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.793999] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:28.849 [2024-04-18 09:21:30.795563] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:28.849 [2024-04-18 09:21:30.795757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.795872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:28.849 [2024-04-18 09:21:30.795941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.784 ms 00:52:28.849 [2024-04-18 09:21:30.795983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.796587] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:52:28.849 [2024-04-18 09:21:30.827314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.827561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:52:28.849 [2024-04-18 09:21:30.827666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.725 ms 00:52:28.849 [2024-04-18 09:21:30.827707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.845078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.845263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:52:28.849 [2024-04-18 09:21:30.845364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:52:28.849 [2024-04-18 09:21:30.845421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.846086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.846217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:28.849 [2024-04-18 09:21:30.846403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:52:28.849 [2024-04-18 09:21:30.846447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.846528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.846603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:28.849 [2024-04-18 09:21:30.846686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:52:28.849 [2024-04-18 09:21:30.846722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.846787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.846826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:28.849 [2024-04-18 09:21:30.846865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:52:28.849 [2024-04-18 09:21:30.846900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.847043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:28.849 [2024-04-18 09:21:30.853491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.853663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:28.849 [2024-04-18 09:21:30.853789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.456 ms 00:52:28.849 [2024-04-18 09:21:30.853830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.853898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.853979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:28.849 [2024-04-18 09:21:30.854020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:52:28.849 [2024-04-18 09:21:30.854054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.854172] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:52:28.849 [2024-04-18 09:21:30.854252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:52:28.849 [2024-04-18 09:21:30.854529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:52:28.849 [2024-04-18 09:21:30.854620] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:52:28.849 [2024-04-18 09:21:30.854757] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:52:28.849 [2024-04-18 09:21:30.854878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:28.849 [2024-04-18 09:21:30.854936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:52:28.849 [2024-04-18 09:21:30.854995] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base 
device capacity: 20480.00 MiB 00:52:28.849 [2024-04-18 09:21:30.855054] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:52:28.849 [2024-04-18 09:21:30.855117] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:28.849 [2024-04-18 09:21:30.855226] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:28.849 [2024-04-18 09:21:30.855270] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:52:28.849 [2024-04-18 09:21:30.855305] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:52:28.849 [2024-04-18 09:21:30.855341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.855392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:28.849 [2024-04-18 09:21:30.855523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.174 ms 00:52:28.849 [2024-04-18 09:21:30.855565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.855674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.849 [2024-04-18 09:21:30.855739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:28.849 [2024-04-18 09:21:30.855784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:52:28.849 [2024-04-18 09:21:30.855883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.849 [2024-04-18 09:21:30.856059] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:28.849 [2024-04-18 09:21:30.856109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:28.849 [2024-04-18 09:21:30.856275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:28.849 [2024-04-18 09:21:30.856331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:28.849 [2024-04-18 09:21:30.856466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:52:28.849 [2024-04-18 09:21:30.856555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:28.849 [2024-04-18 09:21:30.856590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:28.849 [2024-04-18 09:21:30.856624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:28.849 [2024-04-18 09:21:30.856693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:52:28.849 [2024-04-18 09:21:30.856729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:28.849 [2024-04-18 09:21:30.856857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.856953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:28.849 [2024-04-18 09:21:30.856987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:52:28.849 
[2024-04-18 09:21:30.857021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.857055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:52:28.849 [2024-04-18 09:21:30.857089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:52:28.849 [2024-04-18 09:21:30.857124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:52:28.849 [2024-04-18 09:21:30.857252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:28.849 [2024-04-18 09:21:30.857295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:28.849 [2024-04-18 09:21:30.857330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:28.849 [2024-04-18 09:21:30.857364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:28.849 [2024-04-18 09:21:30.857462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:52:28.849 [2024-04-18 09:21:30.857497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:28.849 [2024-04-18 09:21:30.857532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:28.849 [2024-04-18 09:21:30.857655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:28.849 [2024-04-18 09:21:30.857694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:28.849 [2024-04-18 09:21:30.857748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:28.849 [2024-04-18 09:21:30.857818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:52:28.849 [2024-04-18 09:21:30.857879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:52:28.849 [2024-04-18 09:21:30.857912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:28.849 [2024-04-18 09:21:30.857945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:28.849 [2024-04-18 09:21:30.857977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.858010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:28.849 [2024-04-18 09:21:30.858043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:52:28.849 [2024-04-18 09:21:30.858076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.858184] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:52:28.849 [2024-04-18 09:21:30.858243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:28.849 [2024-04-18 09:21:30.858281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:28.849 [2024-04-18 09:21:30.858314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:28.849 [2024-04-18 09:21:30.858355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:28.849 [2024-04-18 09:21:30.858401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:28.850 [2024-04-18 09:21:30.858437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:28.850 [2024-04-18 09:21:30.858521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:28.850 [2024-04-18 09:21:30.858661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:28.850 [2024-04-18 09:21:30.858701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:28.850 
[2024-04-18 09:21:30.858736] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:28.850 [2024-04-18 09:21:30.858794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.858933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:28.850 [2024-04-18 09:21:30.858992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.859046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.859099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:52:28.850 [2024-04-18 09:21:30.859234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:52:28.850 [2024-04-18 09:21:30.859300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:52:28.850 [2024-04-18 09:21:30.859488] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:52:28.850 [2024-04-18 09:21:30.859568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.859689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.859806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.859878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.860027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:52:28.850 [2024-04-18 09:21:30.860089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:52:28.850 [2024-04-18 09:21:30.860146] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:28.850 [2024-04-18 09:21:30.860207] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.860388] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:28.850 [2024-04-18 09:21:30.860453] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:28.850 [2024-04-18 09:21:30.860563] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:28.850 [2024-04-18 09:21:30.860624] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x480120 blk_sz:0x7fee0 00:52:28.850 [2024-04-18 09:21:30.860727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.850 [2024-04-18 09:21:30.860932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:28.850 [2024-04-18 09:21:30.860980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.737 ms 00:52:28.850 [2024-04-18 09:21:30.861016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.850 [2024-04-18 09:21:30.887045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.850 [2024-04-18 09:21:30.887248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:28.850 [2024-04-18 09:21:30.887345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.886 ms 00:52:28.850 [2024-04-18 09:21:30.887418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:28.850 [2024-04-18 09:21:30.887604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:28.850 [2024-04-18 09:21:30.887666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:52:28.850 [2024-04-18 09:21:30.887784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:52:28.850 [2024-04-18 09:21:30.887873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.952084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.952287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:29.109 [2024-04-18 09:21:30.952410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 64.083 ms 00:52:29.109 [2024-04-18 09:21:30.952456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.952556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.952596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:29.109 [2024-04-18 09:21:30.952735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:29.109 [2024-04-18 09:21:30.952811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.952997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.953042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:29.109 [2024-04-18 09:21:30.953130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:52:29.109 [2024-04-18 09:21:30.953201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.953286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.953329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:29.109 [2024-04-18 09:21:30.953365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:52:29.109 [2024-04-18 09:21:30.953417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.981797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.982069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:29.109 [2024-04-18 09:21:30.982170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.230 ms 00:52:29.109 [2024-04-18 09:21:30.982213] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:30.982447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:30.982511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:52:29.109 [2024-04-18 09:21:30.982548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:52:29.109 [2024-04-18 09:21:30.982643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.013258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.013448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:52:29.109 [2024-04-18 09:21:31.013595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.543 ms 00:52:29.109 [2024-04-18 09:21:31.013639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.031671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.031850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:52:29.109 [2024-04-18 09:21:31.031938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:52:29.109 [2024-04-18 09:21:31.031987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.144156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.144473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:52:29.109 [2024-04-18 09:21:31.144612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 112.022 ms 00:52:29.109 [2024-04-18 09:21:31.144663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.144943] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:52:29.109 [2024-04-18 09:21:31.145145] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:52:29.109 [2024-04-18 09:21:31.145255] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:52:29.109 [2024-04-18 09:21:31.145434] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:52:29.109 [2024-04-18 09:21:31.145513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.145555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:52:29.109 [2024-04-18 09:21:31.145651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:52:29.109 [2024-04-18 09:21:31.145705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.145877] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:52:29.109 [2024-04-18 09:21:31.146036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.146088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:52:29.109 [2024-04-18 09:21:31.146142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:52:29.109 [2024-04-18 09:21:31.146187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.175821] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.109 [2024-04-18 09:21:31.176049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:52:29.109 [2024-04-18 09:21:31.176195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.561 ms 00:52:29.109 [2024-04-18 09:21:31.176239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.109 [2024-04-18 09:21:31.193714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.110 [2024-04-18 09:21:31.193936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:52:29.110 [2024-04-18 09:21:31.194026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:52:29.110 [2024-04-18 09:21:31.194068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.110 [2024-04-18 09:21:31.194198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:29.110 [2024-04-18 09:21:31.194255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:52:29.110 [2024-04-18 09:21:31.194324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:29.110 [2024-04-18 09:21:31.194366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:29.110 [2024-04-18 09:21:31.194631] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:52:29.677 [2024-04-18 09:21:31.753796] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:52:29.677 [2024-04-18 09:21:31.754273] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:52:30.245 [2024-04-18 09:21:32.256494] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:52:30.245 [2024-04-18 09:21:32.256804] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:52:30.245 [2024-04-18 09:21:32.257010] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:52:30.245 [2024-04-18 09:21:32.257078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.257171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:52:30.245 [2024-04-18 09:21:32.257217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1062.633 ms 00:52:30.245 [2024-04-18 09:21:32.257254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.257388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.257488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:52:30.245 [2024-04-18 09:21:32.257580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:30.245 [2024-04-18 09:21:32.257622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.274573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:52:30.245 [2024-04-18 09:21:32.275038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.275172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:52:30.245 [2024-04-18 09:21:32.275267] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.310 ms 00:52:30.245 [2024-04-18 09:21:32.275360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.276183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.276324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:52:30.245 [2024-04-18 09:21:32.276432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:52:30.245 [2024-04-18 09:21:32.276569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.278921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.279049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:52:30.245 [2024-04-18 09:21:32.279147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.277 ms 00:52:30.245 [2024-04-18 09:21:32.279236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.325644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.325877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:52:30.245 [2024-04-18 09:21:32.326003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.336 ms 00:52:30.245 [2024-04-18 09:21:32.326046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.326344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.326495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:52:30.245 [2024-04-18 09:21:32.326581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:52:30.245 [2024-04-18 09:21:32.326622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.329240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.329359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:52:30.245 [2024-04-18 09:21:32.329444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.491 ms 00:52:30.245 [2024-04-18 09:21:32.329509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.329580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.329650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:52:30.245 [2024-04-18 09:21:32.329746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:52:30.245 [2024-04-18 09:21:32.329780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.329874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:52:30.245 [2024-04-18 09:21:32.329948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.329982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:52:30.245 [2024-04-18 09:21:32.330040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:52:30.245 [2024-04-18 09:21:32.330073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.330154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:52:30.245 [2024-04-18 09:21:32.330236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:52:30.245 [2024-04-18 09:21:32.330305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:52:30.245 [2024-04-18 09:21:32.330338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:30.245 [2024-04-18 09:21:32.331624] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1538.020 ms, result 0 00:52:30.504 [2024-04-18 09:21:32.346977] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:30.504 [2024-04-18 09:21:32.362958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:52:30.504 [2024-04-18 09:21:32.374655] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:30.504 Validate MD5 checksum, iteration 1 00:52:30.504 09:21:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:52:30.504 09:21:32 -- common/autotest_common.sh@850 -- # return 0 00:52:30.504 09:21:32 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:30.504 09:21:32 -- ftl/common.sh@95 -- # return 0 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:52:30.505 09:21:32 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:30.505 09:21:32 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:30.505 09:21:32 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:30.505 09:21:32 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:30.505 09:21:32 -- ftl/common.sh@154 -- # return 0 00:52:30.505 09:21:32 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:30.505 [2024-04-18 09:21:32.494258] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
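The recovered startup above finished in 1538.020 ms versus 437.462 ms for the earlier clean start; the difference is the recovery-only steps (Recover band state, Preprocess P2L checkpoints, Recover open chunks P2L at 1062.633 ms, and so on) that replay state a clean shutdown would have persisted. The tcp_dd helper used for the re-validation wraps spdk_dd as an NVMe/TCP initiator; a minimal sketch reconstructed from the ftl/common.sh@198-199 trace, with rootdir as a stand-in for the repo path:

    tcp_dd() {
        tcp_initiator_setup
        # ini.json describes the initiator side (target at 127.0.0.1:4420);
        # spdk_dd attaches through it and runs the dd-style copy on core 1.
        "$rootdir"/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir"/test/ftl/config/ini.json "$@"
    }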
00:52:30.505 [2024-04-18 09:21:32.494611] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84363 ] 00:52:30.763 [2024-04-18 09:21:32.664281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:31.022 [2024-04-18 09:21:32.969470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:52:37.170  Copying: 472/1024 [MB] (472 MBps) Copying: 967/1024 [MB] (495 MBps) Copying: 1024/1024 [MB] (average 489 MBps) 00:52:37.170 00:52:37.170 09:21:38 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:52:37.170 09:21:38 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:39.072 Validate MD5 checksum, iteration 2 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@103 -- # sum=9074f5072f9bd55fc88c0866d7016c43 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9074f5072f9bd55fc88c0866d7016c43 != \9\0\7\4\f\5\0\7\2\f\9\b\d\5\5\f\c\8\8\c\0\8\6\6\d\7\0\1\6\c\4\3 ]] 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:52:39.072 09:21:40 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:39.072 09:21:40 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:39.072 09:21:40 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:39.072 09:21:40 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:39.072 09:21:40 -- ftl/common.sh@154 -- # return 0 00:52:39.072 09:21:40 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:39.072 [2024-04-18 09:21:41.078436] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 
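With --bs=1048576 and --count=1024, each iteration covers exactly 1 GiB, and --skip (counted in blocks) selects which GiB: skip=0 is the first window and skip=1024 the second, which is why iteration 1 must reproduce 9074f5072f9bd55fc88c0866d7016c43 and iteration 2 must reproduce 01bffbc0a0a6de22eb1cbae92e2d5ab5 both before and after the dirty restart. A hedged back-of-the-envelope for the byte ranges, with values taken from the spdk_dd command lines above:

    bs=1048576 count=1024
    for skip in 0 1024; do
        start=$((skip * bs)); end=$(((skip + count) * bs))
        # Prints: skip=0 -> GiB [0, 1); skip=1024 -> GiB [1, 2)
        printf 'skip=%-4s -> bytes [%d, %d) = GiB [%d, %d)\n' \
            "$skip" "$start" "$end" $((start >> 30)) $((end >> 30))
    done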
00:52:39.072 [2024-04-18 09:21:41.079759] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84448 ] 00:52:39.330 [2024-04-18 09:21:41.251983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:39.588 [2024-04-18 09:21:41.575709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:52:44.332  Copying: 511/1024 [MB] (511 MBps) Copying: 1009/1024 [MB] (498 MBps) Copying: 1024/1024 [MB] (average 503 MBps) 00:52:44.332 00:52:44.332 09:21:46 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:52:44.332 09:21:46 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@103 -- # sum=01bffbc0a0a6de22eb1cbae92e2d5ab5 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@105 -- # [[ 01bffbc0a0a6de22eb1cbae92e2d5ab5 != \0\1\b\f\f\b\c\0\a\0\a\6\d\e\2\2\e\b\1\c\b\a\e\9\2\e\2\d\5\a\b\5 ]] 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:52:46.862 09:21:48 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:52:46.862 09:21:48 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:52:46.862 09:21:48 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:52:46.862 09:21:48 -- ftl/common.sh@130 -- # [[ -n 84324 ]] 00:52:46.862 09:21:48 -- ftl/common.sh@131 -- # killprocess 84324 00:52:46.862 09:21:48 -- common/autotest_common.sh@936 -- # '[' -z 84324 ']' 00:52:46.862 09:21:48 -- common/autotest_common.sh@940 -- # kill -0 84324 00:52:46.862 09:21:48 -- common/autotest_common.sh@941 -- # uname 00:52:46.862 09:21:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:52:46.862 09:21:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84324 00:52:46.862 09:21:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:52:46.862 09:21:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:52:46.862 09:21:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84324' 00:52:46.862 killing process with pid 84324 00:52:46.862 09:21:48 -- common/autotest_common.sh@955 -- # kill 84324 00:52:46.862 09:21:48 -- common/autotest_common.sh@960 -- # wait 84324 00:52:47.798 [2024-04-18 09:21:49.796455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:52:47.798 [2024-04-18 09:21:49.836892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.837119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:52:47.798 [2024-04-18 09:21:49.837267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:47.798 [2024-04-18 09:21:49.837359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 
09:21:49.837441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:52:47.798 [2024-04-18 09:21:49.841567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.841728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:52:47.798 [2024-04-18 09:21:49.841823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.009 ms 00:52:47.798 [2024-04-18 09:21:49.841905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.842151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.842283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:52:47.798 [2024-04-18 09:21:49.842367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:52:47.798 [2024-04-18 09:21:49.842489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.843833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.843976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:52:47.798 [2024-04-18 09:21:49.844074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.292 ms 00:52:47.798 [2024-04-18 09:21:49.844159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.845283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.845427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:52:47.798 [2024-04-18 09:21:49.845516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:52:47.798 [2024-04-18 09:21:49.845553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.862742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.862879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:52:47.798 [2024-04-18 09:21:49.862971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.076 ms 00:52:47.798 [2024-04-18 09:21:49.863016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.872144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.872299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:52:47.798 [2024-04-18 09:21:49.872435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.033 ms 00:52:47.798 [2024-04-18 09:21:49.872477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.872654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.872746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:52:47.798 [2024-04-18 09:21:49.872818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:52:47.798 [2024-04-18 09:21:49.872861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:47.798 [2024-04-18 09:21:49.889295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:47.798 [2024-04-18 09:21:49.889448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:52:47.798 [2024-04-18 09:21:49.889529] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.306 ms 00:52:47.798 [2024-04-18 09:21:49.889566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.906298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.906534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:52:48.057 [2024-04-18 09:21:49.906660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.606 ms 00:52:48.057 [2024-04-18 09:21:49.906699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.924005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.924221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:52:48.057 [2024-04-18 09:21:49.924308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.168 ms 00:52:48.057 [2024-04-18 09:21:49.924347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.942376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.942641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:52:48.057 [2024-04-18 09:21:49.942763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.801 ms 00:52:48.057 [2024-04-18 09:21:49.942803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.942932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:52:48.057 [2024-04-18 09:21:49.943021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:48.057 [2024-04-18 09:21:49.943114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:52:48.057 [2024-04-18 09:21:49.943269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:52:48.057 [2024-04-18 09:21:49.943417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.943973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944286] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:48.057 [2024-04-18 09:21:49.944643] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:52:48.057 [2024-04-18 09:21:49.944678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9a5d0302-b8f4-49f8-be6e-63a4edcb4df5 00:52:48.057 [2024-04-18 09:21:49.944791] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:52:48.057 [2024-04-18 09:21:49.944828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:52:48.057 [2024-04-18 09:21:49.944862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:52:48.057 [2024-04-18 09:21:49.944947] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:52:48.057 [2024-04-18 09:21:49.944987] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:52:48.057 [2024-04-18 09:21:49.945021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:52:48.057 [2024-04-18 09:21:49.945126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:52:48.057 [2024-04-18 09:21:49.945174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:52:48.057 [2024-04-18 09:21:49.945205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:52:48.057 [2024-04-18 09:21:49.945285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.945323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:52:48.057 [2024-04-18 09:21:49.945358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.352 ms 00:52:48.057 [2024-04-18 09:21:49.945428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.968226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.968448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:52:48.057 [2024-04-18 09:21:49.968599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.718 ms 00:52:48.057 [2024-04-18 09:21:49.968641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:49.969032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:48.057 [2024-04-18 09:21:49.969142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:52:48.057 [2024-04-18 09:21:49.969218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:52:48.057 [2024-04-18 09:21:49.969312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:50.045270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.057 [2024-04-18 09:21:50.045532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:48.057 [2024-04-18 09:21:50.045626] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.057 [2024-04-18 09:21:50.045665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:50.045787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.057 [2024-04-18 09:21:50.045822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:48.057 [2024-04-18 09:21:50.045855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.057 [2024-04-18 09:21:50.045885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:50.046062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.057 [2024-04-18 09:21:50.046195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:48.057 [2024-04-18 09:21:50.046274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.057 [2024-04-18 09:21:50.046358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.057 [2024-04-18 09:21:50.046439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.057 [2024-04-18 09:21:50.046525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:48.057 [2024-04-18 09:21:50.046566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.057 [2024-04-18 09:21:50.046600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.315 [2024-04-18 09:21:50.181214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.315 [2024-04-18 09:21:50.181471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:48.315 [2024-04-18 09:21:50.181563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.315 [2024-04-18 09:21:50.181651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.315 [2024-04-18 09:21:50.231283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.315 [2024-04-18 09:21:50.231515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:48.315 [2024-04-18 09:21:50.231695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.315 [2024-04-18 09:21:50.231735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.315 [2024-04-18 09:21:50.231889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.315 [2024-04-18 09:21:50.231972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:48.315 [2024-04-18 09:21:50.232053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.315 [2024-04-18 09:21:50.232206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.232297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.316 [2024-04-18 09:21:50.232413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:48.316 [2024-04-18 09:21:50.232454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.316 [2024-04-18 09:21:50.232486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.232688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.316 [2024-04-18 09:21:50.232770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:48.316 [2024-04-18 
09:21:50.232842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.316 [2024-04-18 09:21:50.232912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.232989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.316 [2024-04-18 09:21:50.233085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:52:48.316 [2024-04-18 09:21:50.233128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.316 [2024-04-18 09:21:50.233160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.233235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.316 [2024-04-18 09:21:50.233270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:48.316 [2024-04-18 09:21:50.233302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.316 [2024-04-18 09:21:50.233334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.233468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:48.316 [2024-04-18 09:21:50.233552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:48.316 [2024-04-18 09:21:50.233680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:48.316 [2024-04-18 09:21:50.233718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:48.316 [2024-04-18 09:21:50.233918] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 396.991 ms, result 0 00:52:49.690 09:21:51 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:52:49.690 09:21:51 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:49.690 09:21:51 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:52:49.690 09:21:51 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:52:49.690 09:21:51 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:52:49.690 09:21:51 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:49.690 Remove shared memory files 00:52:49.690 09:21:51 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:52:49.690 09:21:51 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:52:49.691 09:21:51 -- ftl/common.sh@205 -- # rm -f rm -f 00:52:49.691 09:21:51 -- ftl/common.sh@206 -- # rm -f rm -f 00:52:49.691 09:21:51 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84115 00:52:49.691 09:21:51 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:52:49.691 09:21:51 -- ftl/common.sh@209 -- # rm -f rm -f 00:52:49.691 00:52:49.691 real 1m37.879s 00:52:49.691 user 2m19.062s 00:52:49.691 sys 0m23.510s 00:52:49.691 09:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:52:49.691 09:21:51 -- common/autotest_common.sh@10 -- # set +x 00:52:49.691 ************************************ 00:52:49.691 END TEST ftl_upgrade_shutdown 00:52:49.691 ************************************ 00:52:49.691 09:21:51 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:52:49.691 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:52:49.691 09:21:51 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:52:49.691 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:52:49.691 09:21:51 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:52:49.691 09:21:51 -- ftl/ftl.sh@14 -- # killprocess 77376 
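[Editor's note] The two "[: -eq: unary operator expected" complaints from ftl.sh lines 82 and 89 are a real shell bug in the test script, not a device failure: when an unset or empty flag variable is expanded unquoted, `[ $flag -eq 1 ]` collapses to `[ -eq 1 ]`, test(1) errors out with a non-zero status, and the guarded block is silently skipped — which is why the run falls straight through to at_ftl_exit. A hedged sketch of the usual repair (the flag name is a placeholder; the log does not show which variable was empty):

    # broken when the flag is unset or empty: '[' sees only '-eq 1'
    flag_test_broken() { [ $SPDK_TEST_FTL_NIGHTLY -eq 1 ]; }
    # default the expansion so test(1) always receives two operands
    flag_test_fixed() { [ "${SPDK_TEST_FTL_NIGHTLY:-0}" -eq 1 ]; }
    # or use [[ ]], which does not word-split its operands away
    flag_test_bash() { [[ ${SPDK_TEST_FTL_NIGHTLY:-0} -eq 1 ]]; }

Because the broken guard merely returns non-zero, the pipeline continues and the error is easy to miss in a passing run; it is worth fixing in the script even though it does not affect this result.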
00:52:49.691 09:21:51 -- common/autotest_common.sh@936 -- # '[' -z 77376 ']' 00:52:49.691 09:21:51 -- common/autotest_common.sh@940 -- # kill -0 77376 00:52:49.691 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77376) - No such process 00:52:49.691 Process with pid 77376 is not found 00:52:49.691 09:21:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77376 is not found' 00:52:49.691 09:21:51 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:52:49.691 09:21:51 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84595 00:52:49.691 09:21:51 -- ftl/ftl.sh@20 -- # waitforlisten 84595 00:52:49.691 09:21:51 -- common/autotest_common.sh@817 -- # '[' -z 84595 ']' 00:52:49.691 09:21:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:49.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:49.691 09:21:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:52:49.691 09:21:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:49.691 09:21:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:52:49.691 09:21:51 -- common/autotest_common.sh@10 -- # set +x 00:52:49.691 09:21:51 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:49.949 [2024-04-18 09:21:51.842335] Starting SPDK v24.05-pre git sha1 ca13e8d81 / DPDK 23.11.0 initialization... 00:52:49.949 [2024-04-18 09:21:51.842746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84595 ] 00:52:49.949 [2024-04-18 09:21:52.019185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:50.208 [2024-04-18 09:21:52.264981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:52:51.582 09:21:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:52:51.582 09:21:53 -- common/autotest_common.sh@850 -- # return 0 00:52:51.583 09:21:53 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:52:51.583 nvme0n1 00:52:51.583 09:21:53 -- ftl/ftl.sh@22 -- # clear_lvols 00:52:51.583 09:21:53 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:52:51.583 09:21:53 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:52:51.841 09:21:53 -- ftl/common.sh@28 -- # stores=d0c92970-d141-4ee0-af97-7b861b97e9fa 00:52:51.841 09:21:53 -- ftl/common.sh@29 -- # for lvs in $stores 00:52:51.841 09:21:53 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d0c92970-d141-4ee0-af97-7b861b97e9fa 00:52:52.099 09:21:54 -- ftl/ftl.sh@23 -- # killprocess 84595 00:52:52.099 09:21:54 -- common/autotest_common.sh@936 -- # '[' -z 84595 ']' 00:52:52.099 09:21:54 -- common/autotest_common.sh@940 -- # kill -0 84595 00:52:52.099 09:21:54 -- common/autotest_common.sh@941 -- # uname 00:52:52.099 09:21:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:52:52.099 09:21:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84595 00:52:52.099 09:21:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:52:52.099 09:21:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:52:52.099 killing process with pid 84595 00:52:52.099 09:21:54 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 84595' 00:52:52.099 09:21:54 -- common/autotest_common.sh@955 -- # kill 84595 00:52:52.099 09:21:54 -- common/autotest_common.sh@960 -- # wait 84595 00:52:55.399 09:21:56 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:52:55.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:55.399 Waiting for block devices as requested 00:52:55.399 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:52:55.399 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:52:55.399 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:52:55.400 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:53:00.667 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:53:00.667 Remove shared memory files 00:53:00.667 09:22:02 -- ftl/ftl.sh@28 -- # remove_shm 00:53:00.667 09:22:02 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:53:00.667 09:22:02 -- ftl/common.sh@205 -- # rm -f rm -f 00:53:00.667 09:22:02 -- ftl/common.sh@206 -- # rm -f rm -f 00:53:00.667 09:22:02 -- ftl/common.sh@207 -- # rm -f rm -f 00:53:00.667 09:22:02 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:53:00.667 09:22:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:53:00.667 ************************************ 00:53:00.667 END TEST ftl 00:53:00.667 ************************************ 00:53:00.667 00:53:00.667 real 10m46.112s 00:53:00.667 user 13m36.662s 00:53:00.667 sys 1m31.835s 00:53:00.667 09:22:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:53:00.667 09:22:02 -- common/autotest_common.sh@10 -- # set +x 00:53:00.667 09:22:02 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:53:00.667 09:22:02 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:53:00.667 09:22:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:53:00.667 09:22:02 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:53:00.667 09:22:02 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:53:00.667 09:22:02 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:53:00.667 09:22:02 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:53:00.667 09:22:02 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:53:00.667 09:22:02 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:53:00.667 09:22:02 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:53:00.667 09:22:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:53:00.667 09:22:02 -- common/autotest_common.sh@10 -- # set +x 00:53:00.667 09:22:02 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:53:00.667 09:22:02 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:53:00.667 09:22:02 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:53:00.667 09:22:02 -- common/autotest_common.sh@10 -- # set +x 00:53:02.099 INFO: APP EXITING 00:53:02.099 INFO: killing all VMs 00:53:02.099 INFO: killing vhost app 00:53:02.099 INFO: EXIT DONE 00:53:02.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:02.924 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:53:02.924 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:53:02.924 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:53:02.924 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:53:03.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:53:03.751 Cleaning 00:53:03.751 Removing: /var/run/dpdk/spdk0/config 00:53:03.751 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:53:03.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:53:03.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:53:03.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:53:03.751 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:53:03.751 Removing: /var/run/dpdk/spdk0/hugepage_info 00:53:03.751 Removing: /var/run/dpdk/spdk0 00:53:03.751 Removing: /var/run/dpdk/spdk_pid61212 00:53:03.751 Removing: /var/run/dpdk/spdk_pid61480 00:53:03.751 Removing: /var/run/dpdk/spdk_pid61744 00:53:03.751 Removing: /var/run/dpdk/spdk_pid61858 00:53:03.751 Removing: /var/run/dpdk/spdk_pid61919 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62067 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62096 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62305 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62422 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62537 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62667 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62787 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62836 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62883 00:53:03.751 Removing: /var/run/dpdk/spdk_pid62962 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63083 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63559 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63645 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63728 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63755 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63924 00:53:03.751 Removing: /var/run/dpdk/spdk_pid63940 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64114 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64136 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64215 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64237 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64312 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64341 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64550 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64596 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64683 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64778 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64824 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64918 00:53:03.751 Removing: /var/run/dpdk/spdk_pid64976 00:53:03.751 Removing: /var/run/dpdk/spdk_pid65033 00:53:03.751 Removing: /var/run/dpdk/spdk_pid65083 00:53:03.751 Removing: /var/run/dpdk/spdk_pid65147 00:53:03.751 Removing: /var/run/dpdk/spdk_pid65203 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65259 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65315 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65371 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65427 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65483 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65534 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65591 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65647 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65705 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65761 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65817 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65881 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65941 00:53:04.010 Removing: /var/run/dpdk/spdk_pid65996 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66049 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66141 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66284 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66474 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66584 00:53:04.010 Removing: /var/run/dpdk/spdk_pid66637 00:53:04.010 Removing: /var/run/dpdk/spdk_pid67124 00:53:04.010 Removing: /var/run/dpdk/spdk_pid67237 00:53:04.010 Removing: 
/var/run/dpdk/spdk_pid67360 00:53:04.010 Removing: /var/run/dpdk/spdk_pid67424 00:53:04.010 Removing: /var/run/dpdk/spdk_pid67459 00:53:04.010 Removing: /var/run/dpdk/spdk_pid67551 00:53:04.010 Removing: /var/run/dpdk/spdk_pid68195 00:53:04.010 Removing: /var/run/dpdk/spdk_pid68253 00:53:04.010 Removing: /var/run/dpdk/spdk_pid68780 00:53:04.010 Removing: /var/run/dpdk/spdk_pid68894 00:53:04.010 Removing: /var/run/dpdk/spdk_pid69027 00:53:04.010 Removing: /var/run/dpdk/spdk_pid69095 00:53:04.010 Removing: /var/run/dpdk/spdk_pid69130 00:53:04.010 Removing: /var/run/dpdk/spdk_pid69165 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71139 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71297 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71301 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71324 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71364 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71368 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71380 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71425 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71434 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71446 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71491 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71495 00:53:04.010 Removing: /var/run/dpdk/spdk_pid71507 00:53:04.010 Removing: /var/run/dpdk/spdk_pid72903 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73030 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73195 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73328 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73455 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73592 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73756 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73846 00:53:04.010 Removing: /var/run/dpdk/spdk_pid73992 00:53:04.010 Removing: /var/run/dpdk/spdk_pid74382 00:53:04.010 Removing: /var/run/dpdk/spdk_pid74434 00:53:04.010 Removing: /var/run/dpdk/spdk_pid74940 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75134 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75248 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75373 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75444 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75479 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75775 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75852 00:53:04.010 Removing: /var/run/dpdk/spdk_pid75945 00:53:04.010 Removing: /var/run/dpdk/spdk_pid76373 00:53:04.010 Removing: /var/run/dpdk/spdk_pid76546 00:53:04.010 Removing: /var/run/dpdk/spdk_pid77376 00:53:04.010 Removing: /var/run/dpdk/spdk_pid77526 00:53:04.010 Removing: /var/run/dpdk/spdk_pid77757 00:53:04.010 Removing: /var/run/dpdk/spdk_pid77861 00:53:04.010 Removing: /var/run/dpdk/spdk_pid78209 00:53:04.010 Removing: /var/run/dpdk/spdk_pid78478 00:53:04.010 Removing: /var/run/dpdk/spdk_pid78849 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79059 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79184 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79259 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79386 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79428 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79508 00:53:04.010 Removing: /var/run/dpdk/spdk_pid79704 00:53:04.324 Removing: /var/run/dpdk/spdk_pid79957 00:53:04.324 Removing: /var/run/dpdk/spdk_pid80283 00:53:04.324 Removing: /var/run/dpdk/spdk_pid80659 00:53:04.324 Removing: /var/run/dpdk/spdk_pid81000 00:53:04.324 Removing: /var/run/dpdk/spdk_pid81436 00:53:04.324 Removing: /var/run/dpdk/spdk_pid81595 00:53:04.324 Removing: /var/run/dpdk/spdk_pid81703 00:53:04.324 Removing: /var/run/dpdk/spdk_pid82275 00:53:04.324 Removing: /var/run/dpdk/spdk_pid82356 
00:53:04.324 Removing: /var/run/dpdk/spdk_pid82715 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83064 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83507 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83635 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83689 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83764 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83830 00:53:04.324 Removing: /var/run/dpdk/spdk_pid83901 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84115 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84171 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84240 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84324 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84363 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84448 00:53:04.324 Removing: /var/run/dpdk/spdk_pid84595 00:53:04.324 Clean 00:53:04.324 09:22:06 -- common/autotest_common.sh@1437 -- # return 0 00:53:04.324 09:22:06 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:53:04.324 09:22:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:53:04.324 09:22:06 -- common/autotest_common.sh@10 -- # set +x 00:53:04.324 09:22:06 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:53:04.324 09:22:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:53:04.324 09:22:06 -- common/autotest_common.sh@10 -- # set +x 00:53:04.582 09:22:06 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:53:04.582 09:22:06 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:53:04.582 09:22:06 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:53:04.582 09:22:06 -- spdk/autotest.sh@389 -- # hash lcov 00:53:04.582 09:22:06 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:53:04.582 09:22:06 -- spdk/autotest.sh@391 -- # hostname 00:53:04.582 09:22:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:53:04.582 geninfo: WARNING: invalid characters removed from testname! 
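[Editor's note] The coverage epilogue that begins here follows the standard three-step lcov flow: capture counters from the instrumented build tree, merge them with the baseline captured before the tests so untouched files still appear, then strip paths that are not SPDK's own code — exactly what the invocations below this note record. A condensed sketch of the same flow, with the long --rc flag list abbreviated (the real commands carry the full set shown in the log):

    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # capture post-test counters; -t labels the tracefile with the hostname
    lcov $RC --no-external -q -c -d "$repo" -t "$(hostname)" -o cov_test.info
    # merge the pre-test baseline with the test capture
    lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info
    # drop bundled dependencies and system headers from the report
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
        lcov $RC --no-external -q -r cov_total.info "$pat" -o cov_total.info
    done

The geninfo warning about invalid characters in the testname is benign: the -t label here is the Fedora image name, and geninfo simply sanitizes characters it does not accept.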
00:53:36.689 09:22:34 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:36.689 09:22:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:39.215 09:22:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:41.748 09:22:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:44.291 09:22:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:46.867 09:22:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:49.396 09:22:50 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:53:49.396 09:22:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:49.396 09:22:50 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:53:49.396 09:22:50 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:49.396 09:22:50 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:49.396 09:22:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:49.396 09:22:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:49.396 09:22:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:49.396 09:22:50 -- paths/export.sh@5 -- $ export PATH 00:53:49.396 09:22:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:49.396 09:22:50 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:53:49.396 09:22:50 -- common/autobuild_common.sh@435 -- $ date +%s 00:53:49.397 09:22:50 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713432170.XXXXXX 00:53:49.397 09:22:51 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713432170.0yK0c2 00:53:49.397 09:22:51 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:53:49.397 09:22:51 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:53:49.397 09:22:51 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:53:49.397 09:22:51 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:53:49.397 09:22:51 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:53:49.397 09:22:51 -- common/autobuild_common.sh@451 -- $ get_config_params 00:53:49.397 09:22:51 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:53:49.397 09:22:51 -- common/autotest_common.sh@10 -- $ set +x 00:53:49.397 09:22:51 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:53:49.397 09:22:51 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:53:49.397 09:22:51 -- pm/common@17 -- $ local monitor 00:53:49.397 09:22:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:49.397 09:22:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86185 00:53:49.397 09:22:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:49.397 09:22:51 -- pm/common@21 -- $ date +%s 00:53:49.397 09:22:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86187 00:53:49.397 09:22:51 -- pm/common@26 -- $ sleep 1 00:53:49.397 09:22:51 -- pm/common@21 -- $ date +%s 00:53:49.397 09:22:51 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713432171 00:53:49.397 09:22:51 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713432171 00:53:49.397 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713432171_collect-vmstat.pm.log 00:53:49.397 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713432171_collect-cpu-load.pm.log 00:53:49.969 09:22:52 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:53:49.969 09:22:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:53:49.969 09:22:52 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:53:49.969 09:22:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:53:49.969 09:22:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:53:49.969 09:22:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:53:49.969 09:22:52 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:53:49.969 09:22:52 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:53:49.969 09:22:52 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:53:50.227 09:22:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:53:50.227 09:22:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:53:50.227 09:22:52 -- pm/common@30 -- $ signal_monitor_resources TERM 00:53:50.227 09:22:52 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:53:50.227 09:22:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:50.227 09:22:52 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:53:50.227 09:22:52 -- pm/common@45 -- $ pid=86192 00:53:50.227 09:22:52 -- pm/common@52 -- $ sudo kill -TERM 86192 00:53:50.227 09:22:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:50.227 09:22:52 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:53:50.227 09:22:52 -- pm/common@45 -- $ pid=86194 00:53:50.227 09:22:52 -- pm/common@52 -- $ sudo kill -TERM 86194 00:53:50.227 + [[ -n 5057 ]] 00:53:50.227 + sudo kill 5057 00:53:50.234 [Pipeline] } 00:53:50.252 [Pipeline] // timeout 00:53:50.257 [Pipeline] } 00:53:50.269 [Pipeline] // stage 00:53:50.274 [Pipeline] } 00:53:50.290 [Pipeline] // catchError 00:53:50.298 [Pipeline] stage 00:53:50.300 [Pipeline] { (Stop VM) 00:53:50.314 [Pipeline] sh 00:53:50.590 + vagrant halt 00:53:53.872 ==> default: Halting domain... 00:54:00.439 [Pipeline] sh 00:54:00.714 + vagrant destroy -f 00:54:04.016 ==> default: Removing domain... 
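[Editor's note] The stop_monitor_resources trace above mirrors how the monitors were started: each collector (collect-cpu-load, collect-vmstat) left a pid file next to its log under the power/ output directory, and teardown walks those files and TERMs whatever is still running. A hedged sketch of that pidfile handshake, with $output standing in for the job's output directory (the real logic lives in the pm/common helper the stack prefixes point at):

    power_dir=$output/power
    for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
        # only signal monitors that actually left a pid file behind
        [[ -e $pidfile ]] || continue
        sudo kill -TERM "$(<"$pidfile")"
    done

Once the monitors exit, the Jenkins epilogue halts and destroys the Vagrant guest (traced above) and then moves, compresses, and archives the output directory, which is what the remaining [Pipeline] stages record before "Finished: SUCCESS".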
00:54:04.982 [Pipeline] sh 00:54:05.259 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:54:05.268 [Pipeline] } 00:54:05.283 [Pipeline] // stage 00:54:05.287 [Pipeline] } 00:54:05.303 [Pipeline] // dir 00:54:05.308 [Pipeline] } 00:54:05.323 [Pipeline] // wrap 00:54:05.330 [Pipeline] } 00:54:05.344 [Pipeline] // catchError 00:54:05.354 [Pipeline] stage 00:54:05.356 [Pipeline] { (Epilogue) 00:54:05.371 [Pipeline] sh 00:54:05.651 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:54:12.257 [Pipeline] catchError 00:54:12.259 [Pipeline] { 00:54:12.273 [Pipeline] sh 00:54:12.572 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:54:12.831 Artifacts sizes are good 00:54:12.839 [Pipeline] } 00:54:12.852 [Pipeline] // catchError 00:54:12.861 [Pipeline] archiveArtifacts 00:54:12.868 Archiving artifacts 00:54:13.013 [Pipeline] cleanWs 00:54:13.024 [WS-CLEANUP] Deleting project workspace... 00:54:13.024 [WS-CLEANUP] Deferred wipeout is used... 00:54:13.031 [WS-CLEANUP] done 00:54:13.032 [Pipeline] } 00:54:13.049 [Pipeline] // stage 00:54:13.056 [Pipeline] } 00:54:13.071 [Pipeline] // node 00:54:13.076 [Pipeline] End of Pipeline 00:54:13.112 Finished: SUCCESS