00:00:00.014 Started by upstream project "autotest-per-patch" build number 127087 00:00:00.014 originally caused by: 00:00:00.015 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.197 Using shallow fetch with depth 1 00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.197 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.455 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.479 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.479 > git config core.sparsecheckout # timeout=10 00:00:06.489 > git read-tree -mu HEAD # timeout=10 00:00:06.505 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.581 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.581 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.710 [Pipeline] Start of Pipeline 00:00:06.725 [Pipeline] library 00:00:06.727 Loading library shm_lib@master 00:00:06.727 Library shm_lib@master is cached. Copying from home. 00:00:06.744 [Pipeline] node 00:00:21.746 Still waiting to schedule task 00:00:21.746 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:07.468 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest_3 00:07:07.469 [Pipeline] { 00:07:07.476 [Pipeline] catchError 00:07:07.478 [Pipeline] { 00:07:07.489 [Pipeline] wrap 00:07:07.496 [Pipeline] { 00:07:07.503 [Pipeline] stage 00:07:07.504 [Pipeline] { (Prologue) 00:07:07.521 [Pipeline] echo 00:07:07.523 Node: VM-host-SM0 00:07:07.529 [Pipeline] cleanWs 00:07:07.537 [WS-CLEANUP] Deleting project workspace... 00:07:07.537 [WS-CLEANUP] Deferred wipeout is used... 
00:07:07.543 [WS-CLEANUP] done 00:07:07.708 [Pipeline] setCustomBuildProperty 00:07:07.801 [Pipeline] httpRequest 00:07:07.822 [Pipeline] echo 00:07:07.823 Sorcerer 10.211.164.101 is alive 00:07:07.832 [Pipeline] httpRequest 00:07:07.836 HttpMethod: GET 00:07:07.836 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:07:07.837 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:07:07.838 Response Code: HTTP/1.1 200 OK 00:07:07.839 Success: Status code 200 is in the accepted range: 200,404 00:07:07.839 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:07:07.982 [Pipeline] sh 00:07:08.263 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:07:08.279 [Pipeline] httpRequest 00:07:08.298 [Pipeline] echo 00:07:08.300 Sorcerer 10.211.164.101 is alive 00:07:08.308 [Pipeline] httpRequest 00:07:08.313 HttpMethod: GET 00:07:08.313 URL: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:07:08.314 Sending request to url: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:07:08.315 Response Code: HTTP/1.1 200 OK 00:07:08.315 Success: Status code 200 is in the accepted range: 200,404 00:07:08.316 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:07:10.494 [Pipeline] sh 00:07:10.861 + tar --no-same-owner -xf spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:07:14.157 [Pipeline] sh 00:07:14.437 + git -C spdk log --oneline -n5 00:07:14.437 dca21ec0f scripts/nvmf_perf: confirm set system settings 00:07:14.437 77f816207 scripts/nvmf_perf: modify set_pause_frames 00:07:14.437 81767f27c scripts/nvmf_perf: check all config file sections are present 00:07:14.437 166db62dc scripts/nvmf_perf: disable fio group reporting 00:07:14.437 dc3b3835d scripts/nvmf_perf: use dataclasses for collecting results data 00:07:14.456 [Pipeline] writeFile 00:07:14.472 [Pipeline] sh 00:07:14.752 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:14.762 [Pipeline] sh 00:07:15.041 + cat autorun-spdk.conf 00:07:15.041 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:15.041 SPDK_TEST_NVME=1 00:07:15.041 SPDK_TEST_FTL=1 00:07:15.041 SPDK_TEST_ISAL=1 00:07:15.041 SPDK_RUN_ASAN=1 00:07:15.041 SPDK_RUN_UBSAN=1 00:07:15.041 SPDK_TEST_XNVME=1 00:07:15.041 SPDK_TEST_NVME_FDP=1 00:07:15.041 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:15.048 RUN_NIGHTLY=0 00:07:15.050 [Pipeline] } 00:07:15.067 [Pipeline] // stage 00:07:15.083 [Pipeline] stage 00:07:15.086 [Pipeline] { (Run VM) 00:07:15.100 [Pipeline] sh 00:07:15.379 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:15.379 + echo 'Start stage prepare_nvme.sh' 00:07:15.379 Start stage prepare_nvme.sh 00:07:15.379 + [[ -n 4 ]] 00:07:15.379 + disk_prefix=ex4 00:07:15.379 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:07:15.379 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:07:15.379 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:07:15.379 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:15.379 ++ SPDK_TEST_NVME=1 00:07:15.379 ++ SPDK_TEST_FTL=1 00:07:15.379 ++ SPDK_TEST_ISAL=1 00:07:15.379 ++ SPDK_RUN_ASAN=1 00:07:15.379 ++ SPDK_RUN_UBSAN=1 00:07:15.379 ++ SPDK_TEST_XNVME=1 00:07:15.379 ++ SPDK_TEST_NVME_FDP=1 00:07:15.379 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:15.379 ++ RUN_NIGHTLY=0 
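The autorun-spdk.conf sourced above is plain bash; the `++` prefixes mark commands executed inside the sourced file under `set -x`. A minimal sketch of the pattern the next stage follows when sizing NVMe backing images (the flag names and sizes are taken from this run; the surrounding logic is illustrative, not the actual prepare_nvme.sh):

#!/usr/bin/env bash
set -ex                                  # the '+'/'++' trace prefixes in the log come from -x
conf=/var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
if [[ -e $conf ]]; then
    source "$conf"                       # import SPDK_TEST_* flags into this shell
fi
declare -A nvme_files
nvme_files['nvme.img']=5G                # base backend, as in the log
if (( SPDK_TEST_FTL == 1 )); then
    nvme_files['nvme-ftl.img']=6G        # FTL tests add a 6G backend
fi
if (( SPDK_TEST_NVME_FDP == 1 )); then
    nvme_files['nvme-fdp.img']=1G        # FDP tests add a 1G backend
fi
for img in "${!nvme_files[@]}"; do       # hypothetical summary loop, not in the CI script
    echo "$img -> ${nvme_files[$img]}"
done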
00:07:15.379 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:07:15.379 + nvme_files=() 00:07:15.379 + declare -A nvme_files 00:07:15.379 + backend_dir=/var/lib/libvirt/images/backends 00:07:15.379 + nvme_files['nvme.img']=5G 00:07:15.379 + nvme_files['nvme-cmb.img']=5G 00:07:15.379 + nvme_files['nvme-multi0.img']=4G 00:07:15.379 + nvme_files['nvme-multi1.img']=4G 00:07:15.379 + nvme_files['nvme-multi2.img']=4G 00:07:15.379 + nvme_files['nvme-openstack.img']=8G 00:07:15.379 + nvme_files['nvme-zns.img']=5G 00:07:15.379 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:15.379 + (( SPDK_TEST_FTL == 1 )) 00:07:15.379 + nvme_files["nvme-ftl.img"]=6G 00:07:15.379 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:15.379 + nvme_files["nvme-fdp.img"]=1G 00:07:15.379 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:07:15.379 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:15.379 + for nvme in "${!nvme_files[@]}" 00:07:15.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:07:15.638 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:07:15.638 + for nvme in "${!nvme_files[@]}" 00:07:15.638 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:07:15.638 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:15.638 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:07:15.638 + echo 'End stage prepare_nvme.sh' 00:07:15.638 
End stage prepare_nvme.sh 00:07:15.649 [Pipeline] sh 00:07:15.928 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:15.928 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:07:15.928 00:07:15.928 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:07:15.928 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:07:15.928 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:07:15.928 HELP=0 00:07:15.928 DRY_RUN=0 00:07:15.928 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:07:15.928 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:07:15.928 NVME_AUTO_CREATE=0 00:07:15.928 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:07:15.928 NVME_CMB=,,,, 00:07:15.928 NVME_PMR=,,,, 00:07:15.928 NVME_ZNS=,,,, 00:07:15.928 NVME_MS=true,,,, 00:07:15.928 NVME_FDP=,,,on, 00:07:15.928 SPDK_VAGRANT_DISTRO=fedora38 00:07:15.928 SPDK_VAGRANT_VMCPU=10 00:07:15.928 SPDK_VAGRANT_VMRAM=12288 00:07:15.928 SPDK_VAGRANT_PROVIDER=libvirt 00:07:15.928 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:15.928 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:15.928 SPDK_OPENSTACK_NETWORK=0 00:07:15.928 VAGRANT_PACKAGE_BOX=0 00:07:15.928 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:07:15.928 FORCE_DISTRO=true 00:07:15.928 VAGRANT_BOX_VERSION= 00:07:15.928 EXTRA_VAGRANTFILES= 00:07:15.928 NIC_MODEL=e1000 00:07:15.928 00:07:15.928 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt' 00:07:15.928 /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:07:19.210 Bringing machine 'default' up with 'libvirt' provider... 00:07:20.584 ==> default: Creating image (snapshot of base box volume). 00:07:20.584 ==> default: Creating domain with the following settings... 
00:07:20.584 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721840885_efc38c1a0ca6af1eba94 00:07:20.585 ==> default: -- Domain type: kvm 00:07:20.585 ==> default: -- Cpus: 10 00:07:20.585 ==> default: -- Feature: acpi 00:07:20.585 ==> default: -- Feature: apic 00:07:20.585 ==> default: -- Feature: pae 00:07:20.585 ==> default: -- Memory: 12288M 00:07:20.585 ==> default: -- Memory Backing: hugepages: 00:07:20.585 ==> default: -- Management MAC: 00:07:20.585 ==> default: -- Loader: 00:07:20.585 ==> default: -- Nvram: 00:07:20.585 ==> default: -- Base box: spdk/fedora38 00:07:20.585 ==> default: -- Storage pool: default 00:07:20.585 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721840885_efc38c1a0ca6af1eba94.img (20G) 00:07:20.585 ==> default: -- Volume Cache: default 00:07:20.585 ==> default: -- Kernel: 00:07:20.585 ==> default: -- Initrd: 00:07:20.585 ==> default: -- Graphics Type: vnc 00:07:20.585 ==> default: -- Graphics Port: -1 00:07:20.585 ==> default: -- Graphics IP: 127.0.0.1 00:07:20.585 ==> default: -- Graphics Password: Not defined 00:07:20.585 ==> default: -- Video Type: cirrus 00:07:20.585 ==> default: -- Video VRAM: 9216 00:07:20.585 ==> default: -- Sound Type: 00:07:20.585 ==> default: -- Keymap: en-us 00:07:20.585 ==> default: -- TPM Path: 00:07:20.585 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:20.585 ==> default: -- Command line args: 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:07:20.585 ==> default: -> value=-drive, 00:07:20.585 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:07:20.585 ==> default: -> value=-device, 00:07:20.585 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:20.843 ==> default: Creating shared folders metadata... 00:07:20.843 ==> default: Starting domain. 00:07:22.755 ==> default: Waiting for domain to get an IP address... 00:07:37.653 ==> default: Waiting for SSH to become available... 00:07:39.026 ==> default: Configuring and enabling network interfaces... 00:07:43.209 default: SSH address: 192.168.121.35:22 00:07:43.209 default: SSH username: vagrant 00:07:43.209 default: SSH auth method: private key 00:07:45.789 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:07:53.907 ==> default: Mounting SSHFS shared folder... 00:07:54.473 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:07:54.473 ==> default: Checking Mount.. 00:07:55.848 ==> default: Folder Successfully Mounted! 00:07:55.848 ==> default: Running provisioner: file... 00:07:56.416 default: ~/.gitconfig => .gitconfig 00:07:56.674 00:07:56.674 SUCCESS! 00:07:56.674 00:07:56.674 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:07:56.674 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:07:56.674 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 00:07:56.674 00:07:56.683 [Pipeline] } 00:07:56.702 [Pipeline] // stage 00:07:56.711 [Pipeline] dir 00:07:56.712 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt 00:07:56.714 [Pipeline] { 00:07:56.728 [Pipeline] catchError 00:07:56.731 [Pipeline] { 00:07:56.746 [Pipeline] sh 00:07:57.026 + vagrant ssh-config --host vagrant 00:07:57.026 + sed -ne /^Host/,$p 00:07:57.026 + tee ssh_conf 00:08:00.313 Host vagrant 00:08:00.313 HostName 192.168.121.35 00:08:00.313 User vagrant 00:08:00.313 Port 22 00:08:00.313 UserKnownHostsFile /dev/null 00:08:00.313 StrictHostKeyChecking no 00:08:00.313 PasswordAuthentication no 00:08:00.313 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:08:00.313 IdentitiesOnly yes 00:08:00.313 LogLevel FATAL 00:08:00.313 ForwardAgent yes 00:08:00.313 ForwardX11 yes 00:08:00.313 00:08:00.327 [Pipeline] withEnv 00:08:00.330 [Pipeline] { 00:08:00.346 [Pipeline] sh 00:08:00.626 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:00.626 source /etc/os-release 00:08:00.626 [[ -e /image.version ]] && img=$(< /image.version) 00:08:00.626 # Minimal, systemd-like check. 
00:08:00.626 if [[ -e /.dockerenv ]]; then 00:08:00.626 # Clear garbage from the node's name: 00:08:00.626 # agt-er_autotest_547-896 -> autotest_547-896 00:08:00.626 # $HOSTNAME is the actual container id 00:08:00.626 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:00.626 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:08:00.626 # We can assume this is a mount from a host where container is running, 00:08:00.626 # so fetch its hostname to easily identify the target swarm worker. 00:08:00.626 container="$(< /etc/hostname) ($agent)" 00:08:00.626 else 00:08:00.626 # Fallback 00:08:00.626 container=$agent 00:08:00.626 fi 00:08:00.626 fi 00:08:00.626 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:00.626 00:08:00.637 [Pipeline] } 00:08:00.655 [Pipeline] // withEnv 00:08:00.663 [Pipeline] setCustomBuildProperty 00:08:00.677 [Pipeline] stage 00:08:00.680 [Pipeline] { (Tests) 00:08:00.698 [Pipeline] sh 00:08:00.977 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:00.992 [Pipeline] sh 00:08:01.271 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:08:01.588 [Pipeline] timeout 00:08:01.589 Timeout set to expire in 40 min 00:08:01.591 [Pipeline] { 00:08:01.605 [Pipeline] sh 00:08:01.883 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:02.449 HEAD is now at dca21ec0f scripts/nvmf_perf: confirm set system settings 00:08:02.463 [Pipeline] sh 00:08:02.736 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:03.008 [Pipeline] sh 00:08:03.288 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:03.559 [Pipeline] sh 00:08:03.835 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:08:04.093 ++ readlink -f spdk_repo 00:08:04.093 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:04.093 + [[ -n /home/vagrant/spdk_repo ]] 00:08:04.093 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:04.093 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:04.093 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:04.093 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:04.093 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:04.093 + [[ nvme-vg-autotest == pkgdep-* ]] 00:08:04.093 + cd /home/vagrant/spdk_repo 00:08:04.093 + source /etc/os-release 00:08:04.093 ++ NAME='Fedora Linux' 00:08:04.093 ++ VERSION='38 (Cloud Edition)' 00:08:04.093 ++ ID=fedora 00:08:04.093 ++ VERSION_ID=38 00:08:04.093 ++ VERSION_CODENAME= 00:08:04.093 ++ PLATFORM_ID=platform:f38 00:08:04.093 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:04.093 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:04.093 ++ LOGO=fedora-logo-icon 00:08:04.093 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:04.093 ++ HOME_URL=https://fedoraproject.org/ 00:08:04.093 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:04.093 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:04.093 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:04.093 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:04.093 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:04.093 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:04.093 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:04.093 ++ SUPPORT_END=2024-05-14 00:08:04.093 ++ VARIANT='Cloud Edition' 00:08:04.093 ++ VARIANT_ID=cloud 00:08:04.093 + uname -a 00:08:04.093 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:04.093 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:04.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:04.650 Hugepages 00:08:04.650 node hugesize free / total 00:08:04.650 node0 1048576kB 0 / 0 00:08:04.650 node0 2048kB 0 / 0 00:08:04.650 00:08:04.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:04.650 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:04.650 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:04.650 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:04.650 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:04.650 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:04.650 + rm -f /tmp/spdk-ld-path 00:08:04.650 + source autorun-spdk.conf 00:08:04.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:04.650 ++ SPDK_TEST_NVME=1 00:08:04.650 ++ SPDK_TEST_FTL=1 00:08:04.650 ++ SPDK_TEST_ISAL=1 00:08:04.650 ++ SPDK_RUN_ASAN=1 00:08:04.650 ++ SPDK_RUN_UBSAN=1 00:08:04.650 ++ SPDK_TEST_XNVME=1 00:08:04.650 ++ SPDK_TEST_NVME_FDP=1 00:08:04.650 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:04.650 ++ RUN_NIGHTLY=0 00:08:04.650 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:04.650 + [[ -n '' ]] 00:08:04.650 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:04.909 + for M in /var/spdk/build-*-manifest.txt 00:08:04.909 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:04.909 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:04.909 + for M in /var/spdk/build-*-manifest.txt 00:08:04.909 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:04.909 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:04.909 ++ uname 00:08:04.909 + [[ Linux == \L\i\n\u\x ]] 00:08:04.909 + sudo dmesg -T 00:08:04.909 + sudo dmesg --clear 00:08:04.909 + dmesg_pid=5189 00:08:04.909 + [[ Fedora Linux == FreeBSD ]] 00:08:04.909 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.909 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.909 + sudo dmesg -Tw 00:08:04.909 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:04.909 + [[ -x /usr/src/fio-static/fio ]] 00:08:04.909 + export FIO_BIN=/usr/src/fio-static/fio 00:08:04.909 + FIO_BIN=/usr/src/fio-static/fio 00:08:04.909 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:04.909 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:04.909 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:04.909 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.909 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.909 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:04.909 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.909 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.909 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:04.909 Test configuration: 00:08:04.909 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:04.909 SPDK_TEST_NVME=1 00:08:04.909 SPDK_TEST_FTL=1 00:08:04.909 SPDK_TEST_ISAL=1 00:08:04.909 SPDK_RUN_ASAN=1 00:08:04.909 SPDK_RUN_UBSAN=1 00:08:04.909 SPDK_TEST_XNVME=1 00:08:04.909 SPDK_TEST_NVME_FDP=1 00:08:04.909 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:04.909 RUN_NIGHTLY=0 17:08:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.909 17:08:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:04.909 17:08:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.909 17:08:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.909 17:08:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.909 17:08:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.909 17:08:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.909 17:08:51 -- paths/export.sh@5 -- $ export PATH 00:08:04.909 17:08:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.909 17:08:51 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:04.909 17:08:51 -- common/autobuild_common.sh@447 -- $ date +%s 00:08:04.909 17:08:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721840931.XXXXXX 00:08:04.909 17:08:51 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721840931.ucsugx 00:08:04.909 17:08:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:08:04.909 17:08:51 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:08:04.909 17:08:51 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:04.909 17:08:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:04.909 17:08:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:04.909 17:08:51 -- common/autobuild_common.sh@463 -- $ get_config_params 00:08:04.909 17:08:51 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:08:04.909 17:08:51 -- common/autotest_common.sh@10 -- $ set +x 00:08:04.909 17:08:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:08:04.909 17:08:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:08:04.909 17:08:51 -- pm/common@17 -- $ local monitor 00:08:04.909 17:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.909 17:08:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.909 17:08:51 -- pm/common@25 -- $ sleep 1 00:08:04.909 17:08:51 -- pm/common@21 -- $ date +%s 00:08:04.909 17:08:51 -- pm/common@21 -- $ date +%s 00:08:04.909 17:08:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721840931 00:08:04.909 17:08:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721840931 00:08:05.168 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721840931_collect-vmstat.pm.log 00:08:05.168 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721840931_collect-cpu-load.pm.log 00:08:06.103 17:08:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:08:06.103 17:08:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:06.103 17:08:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:06.103 17:08:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:06.103 17:08:52 -- spdk/autobuild.sh@16 -- $ date -u 00:08:06.103 Wed Jul 24 05:08:52 PM UTC 2024 00:08:06.103 17:08:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:06.103 v24.09-pre-323-gdca21ec0f 00:08:06.103 17:08:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:08:06.103 17:08:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:08:06.103 17:08:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:08:06.103 17:08:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:08:06.103 17:08:52 -- common/autotest_common.sh@10 -- $ set +x 00:08:06.103 ************************************ 00:08:06.103 START TEST asan 00:08:06.103 ************************************ 00:08:06.103 using asan 00:08:06.103 17:08:52 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:08:06.103 00:08:06.103 
real 0m0.000s 00:08:06.103 user 0m0.000s 00:08:06.103 sys 0m0.000s 00:08:06.103 17:08:52 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:08:06.103 17:08:52 asan -- common/autotest_common.sh@10 -- $ set +x 00:08:06.103 ************************************ 00:08:06.103 END TEST asan 00:08:06.103 ************************************ 00:08:06.103 17:08:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:06.103 17:08:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:06.103 17:08:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:08:06.103 17:08:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:08:06.103 17:08:52 -- common/autotest_common.sh@10 -- $ set +x 00:08:06.103 ************************************ 00:08:06.103 START TEST ubsan 00:08:06.103 ************************************ 00:08:06.103 using ubsan 00:08:06.103 17:08:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:08:06.103 00:08:06.103 real 0m0.000s 00:08:06.103 user 0m0.000s 00:08:06.103 sys 0m0.000s 00:08:06.103 17:08:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:08:06.103 ************************************ 00:08:06.103 END TEST ubsan 00:08:06.103 ************************************ 00:08:06.103 17:08:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:08:06.103 17:08:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:06.103 17:08:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:06.103 17:08:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:06.103 17:08:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:08:06.362 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:06.362 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:06.620 Using 'verbs' RDMA provider 00:08:20.189 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:08:35.087 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:08:35.087 Creating mk/config.mk...done. 00:08:35.087 Creating mk/cc.flags.mk...done. 00:08:35.087 Type 'make' to build. 
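The configure invocation above fixes the feature set for this run, and "Type 'make' to build." hands off to the make stage that follows. A minimal local replay of those two steps, with the checkout path, flags, and -j count copied from this log (a sketch; trim the flags to what your machine actually has installed):

cd /home/vagrant/spdk_repo/spdk          # checkout path from this run
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-xnvme --with-shared           # flags as recorded above
make -j10                                # autobuild wraps this as 'run_test make make -j10'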
00:08:35.087 17:09:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:08:35.087 17:09:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:08:35.087 17:09:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:08:35.087 17:09:19 -- common/autotest_common.sh@10 -- $ set +x 00:08:35.087 ************************************ 00:08:35.087 START TEST make 00:08:35.087 ************************************ 00:08:35.087 17:09:19 make -- common/autotest_common.sh@1125 -- $ make -j10 00:08:35.087 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:08:35.087 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:08:35.087 meson setup builddir \ 00:08:35.087 -Dwith-libaio=enabled \ 00:08:35.087 -Dwith-liburing=enabled \ 00:08:35.087 -Dwith-libvfn=disabled \ 00:08:35.087 -Dwith-spdk=false && \ 00:08:35.087 meson compile -C builddir && \ 00:08:35.087 cd -) 00:08:35.087 make[1]: Nothing to be done for 'all'. 00:08:36.022 The Meson build system 00:08:36.022 Version: 1.3.1 00:08:36.022 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:08:36.022 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:08:36.022 Build type: native build 00:08:36.022 Project name: xnvme 00:08:36.022 Project version: 0.7.3 00:08:36.022 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:36.022 C linker for the host machine: cc ld.bfd 2.39-16 00:08:36.022 Host machine cpu family: x86_64 00:08:36.022 Host machine cpu: x86_64 00:08:36.022 Message: host_machine.system: linux 00:08:36.022 Compiler for C supports arguments -Wno-missing-braces: YES 00:08:36.022 Compiler for C supports arguments -Wno-cast-function-type: YES 00:08:36.022 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:08:36.022 Run-time dependency threads found: YES 00:08:36.022 Has header "setupapi.h" : NO 00:08:36.022 Has header "linux/blkzoned.h" : YES 00:08:36.022 Has header "linux/blkzoned.h" : YES (cached) 00:08:36.022 Has header "libaio.h" : YES 00:08:36.022 Library aio found: YES 00:08:36.022 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:36.022 Run-time dependency liburing found: YES 2.2 00:08:36.022 Dependency libvfn skipped: feature with-libvfn disabled 00:08:36.022 Run-time dependency appleframeworks found: NO (tried framework) 00:08:36.022 Run-time dependency appleframeworks found: NO (tried framework) 00:08:36.022 Configuring xnvme_config.h using configuration 00:08:36.022 Configuring xnvme.spec using configuration 00:08:36.022 Run-time dependency bash-completion found: YES 2.11 00:08:36.022 Message: Bash-completions: /usr/share/bash-completion/completions 00:08:36.022 Program cp found: YES (/usr/bin/cp) 00:08:36.022 Has header "winsock2.h" : NO 00:08:36.022 Has header "dbghelp.h" : NO 00:08:36.022 Library rpcrt4 found: NO 00:08:36.022 Library rt found: YES 00:08:36.022 Checking for function "clock_gettime" with dependency -lrt: YES 00:08:36.022 Found CMake: /usr/bin/cmake (3.27.7) 00:08:36.022 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:08:36.022 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:08:36.022 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:08:36.022 Build targets in project: 32 00:08:36.022 00:08:36.022 xnvme 0.7.3 00:08:36.022 00:08:36.022 User defined options 00:08:36.022 with-libaio : enabled 00:08:36.022 with-liburing: enabled 00:08:36.022 with-libvfn : disabled 00:08:36.022 with-spdk : false 00:08:36.022 00:08:36.022 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:36.591 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:08:36.591 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:08:36.591 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:08:36.591 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:08:36.591 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:08:36.591 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:08:36.591 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:08:36.591 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:08:36.591 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:08:36.591 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:08:36.591 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:08:36.591 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:08:36.591 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:08:36.591 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:08:36.591 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:08:36.849 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:08:36.849 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:08:36.849 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:08:36.849 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:08:36.849 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:08:36.849 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:08:36.849 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:08:36.849 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:08:36.849 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:08:36.849 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:08:36.849 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:08:36.849 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:08:36.849 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:08:36.849 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:08:36.849 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:08:36.849 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:08:36.849 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:08:36.849 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:08:36.849 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:08:36.849 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:08:36.849 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:08:36.849 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:08:36.849 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:08:36.849 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:08:36.849 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:08:36.849 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:08:36.849 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:08:36.849 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:08:36.849 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:08:37.108 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:08:37.108 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:08:37.108 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:08:37.108 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:08:37.108 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:08:37.108 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:08:37.108 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:08:37.108 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:08:37.108 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:08:37.108 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:08:37.108 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:08:37.108 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:08:37.108 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:08:37.108 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:08:37.108 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:08:37.108 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:08:37.108 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:08:37.108 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:08:37.108 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:08:37.108 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:08:37.108 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:08:37.108 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:08:37.108 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:08:37.366 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:08:37.366 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:08:37.366 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:08:37.366 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:08:37.366 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:08:37.366 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:08:37.366 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:08:37.366 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:08:37.366 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:08:37.366 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:08:37.366 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:08:37.366 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:08:37.366 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:08:37.366 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:08:37.366 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:08:37.633 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:08:37.633 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:08:37.633 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:08:37.633 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:08:37.633 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:08:37.633 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:08:37.633 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:08:37.633 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:08:37.633 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:08:37.633 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:08:37.633 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:08:37.633 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:08:37.633 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:08:37.633 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:08:37.633 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:08:37.633 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:08:37.633 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:08:37.633 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:08:37.891 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:08:37.891 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:08:37.891 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:08:37.891 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:08:37.891 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:08:37.891 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:08:37.891 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:08:37.891 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:08:37.891 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:08:37.891 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:08:37.891 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:08:37.891 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:08:37.891 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:08:37.891 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:08:37.891 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:08:37.891 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:08:37.891 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:08:37.891 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:08:37.891 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:08:37.891 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:08:37.891 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:08:37.891 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:08:37.891 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:08:37.891 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:08:37.891 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:08:37.891 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:08:37.891 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:08:37.891 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:08:37.891 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:08:38.150 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_req.c.o 00:08:38.150 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:08:38.150 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:08:38.150 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:08:38.150 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:08:38.150 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:08:38.150 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:08:38.150 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:08:38.150 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:08:38.150 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:08:38.150 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:08:38.150 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:08:38.150 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:08:38.150 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:08:38.150 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:08:38.408 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:08:38.408 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:08:38.408 [146/203] Linking target lib/libxnvme.so 00:08:38.408 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:08:38.408 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:08:38.408 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:08:38.408 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:08:38.408 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:08:38.408 [152/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:08:38.408 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:08:38.408 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:08:38.408 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:08:38.408 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:08:38.408 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:08:38.667 [158/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:08:38.667 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:08:38.667 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:08:38.667 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:08:38.667 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:08:38.667 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:08:38.667 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:08:38.667 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:08:38.667 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:08:38.667 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:08:38.667 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:08:38.925 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:08:38.925 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:08:38.925 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:08:38.925 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:08:38.925 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:08:38.925 [174/203] Linking static target lib/libxnvme.a 00:08:38.925 [175/203] Linking target tests/xnvme_tests_enum 
00:08:38.925 [176/203] Linking target tests/xnvme_tests_ioworker 00:08:38.925 [177/203] Linking target tests/xnvme_tests_xnvme_cli 00:08:39.183 [178/203] Linking target tests/xnvme_tests_lblk 00:08:39.183 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:08:39.183 [180/203] Linking target tests/xnvme_tests_znd_state 00:08:39.183 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:08:39.183 [182/203] Linking target tests/xnvme_tests_scc 00:08:39.183 [183/203] Linking target tests/xnvme_tests_buf 00:08:39.183 [184/203] Linking target tests/xnvme_tests_async_intf 00:08:39.183 [185/203] Linking target tests/xnvme_tests_cli 00:08:39.183 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:08:39.183 [187/203] Linking target tests/xnvme_tests_kvs 00:08:39.183 [188/203] Linking target tests/xnvme_tests_znd_append 00:08:39.183 [189/203] Linking target tools/lblk 00:08:39.183 [190/203] Linking target tests/xnvme_tests_map 00:08:39.183 [191/203] Linking target tools/xnvme 00:08:39.183 [192/203] Linking target tools/xnvme_file 00:08:39.183 [193/203] Linking target tools/xdd 00:08:39.183 [194/203] Linking target tools/kvs 00:08:39.183 [195/203] Linking target tools/zoned 00:08:39.184 [196/203] Linking target examples/xnvme_enum 00:08:39.184 [197/203] Linking target examples/xnvme_dev 00:08:39.184 [198/203] Linking target examples/xnvme_hello 00:08:39.184 [199/203] Linking target examples/xnvme_io_async 00:08:39.184 [200/203] Linking target examples/zoned_io_sync 00:08:39.184 [201/203] Linking target examples/xnvme_single_sync 00:08:39.184 [202/203] Linking target examples/zoned_io_async 00:08:39.184 [203/203] Linking target examples/xnvme_single_async 00:08:39.184 INFO: autodetecting backend as ninja 00:08:39.184 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:08:39.184 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:08:47.302 The Meson build system 00:08:47.302 Version: 1.3.1 00:08:47.302 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:08:47.302 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:08:47.302 Build type: native build 00:08:47.302 Program cat found: YES (/usr/bin/cat) 00:08:47.302 Project name: DPDK 00:08:47.302 Project version: 24.03.0 00:08:47.302 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:47.302 C linker for the host machine: cc ld.bfd 2.39-16 00:08:47.302 Host machine cpu family: x86_64 00:08:47.302 Host machine cpu: x86_64 00:08:47.302 Message: ## Building in Developer Mode ## 00:08:47.302 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:47.302 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:08:47.302 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:47.302 Program python3 found: YES (/usr/bin/python3) 00:08:47.302 Program cat found: YES (/usr/bin/cat) 00:08:47.302 Compiler for C supports arguments -march=native: YES 00:08:47.302 Checking for size of "void *" : 8 00:08:47.302 Checking for size of "void *" : 8 (cached) 00:08:47.302 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:08:47.302 Library m found: YES 00:08:47.302 Library numa found: YES 00:08:47.302 Has header "numaif.h" : YES 00:08:47.302 Library fdt found: NO 00:08:47.302 Library execinfo found: NO 00:08:47.302 Has header "execinfo.h" : YES 00:08:47.302 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:47.302 Run-time 
dependency libarchive found: NO (tried pkgconfig) 00:08:47.302 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:47.302 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:47.302 Run-time dependency openssl found: YES 3.0.9 00:08:47.302 Run-time dependency libpcap found: YES 1.10.4 00:08:47.302 Has header "pcap.h" with dependency libpcap: YES 00:08:47.302 Compiler for C supports arguments -Wcast-qual: YES 00:08:47.302 Compiler for C supports arguments -Wdeprecated: YES 00:08:47.302 Compiler for C supports arguments -Wformat: YES 00:08:47.302 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:47.302 Compiler for C supports arguments -Wformat-security: NO 00:08:47.302 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:47.302 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:47.302 Compiler for C supports arguments -Wnested-externs: YES 00:08:47.302 Compiler for C supports arguments -Wold-style-definition: YES 00:08:47.302 Compiler for C supports arguments -Wpointer-arith: YES 00:08:47.302 Compiler for C supports arguments -Wsign-compare: YES 00:08:47.302 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:47.302 Compiler for C supports arguments -Wundef: YES 00:08:47.302 Compiler for C supports arguments -Wwrite-strings: YES 00:08:47.302 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:47.302 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:47.302 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:47.302 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:47.302 Program objdump found: YES (/usr/bin/objdump) 00:08:47.302 Compiler for C supports arguments -mavx512f: YES 00:08:47.303 Checking if "AVX512 checking" compiles: YES 00:08:47.303 Fetching value of define "__SSE4_2__" : 1 00:08:47.303 Fetching value of define "__AES__" : 1 00:08:47.303 Fetching value of define "__AVX__" : 1 00:08:47.303 Fetching value of define "__AVX2__" : 1 00:08:47.303 Fetching value of define "__AVX512BW__" : (undefined) 00:08:47.303 Fetching value of define "__AVX512CD__" : (undefined) 00:08:47.303 Fetching value of define "__AVX512DQ__" : (undefined) 00:08:47.303 Fetching value of define "__AVX512F__" : (undefined) 00:08:47.303 Fetching value of define "__AVX512VL__" : (undefined) 00:08:47.303 Fetching value of define "__PCLMUL__" : 1 00:08:47.303 Fetching value of define "__RDRND__" : 1 00:08:47.303 Fetching value of define "__RDSEED__" : 1 00:08:47.303 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:47.303 Fetching value of define "__znver1__" : (undefined) 00:08:47.303 Fetching value of define "__znver2__" : (undefined) 00:08:47.303 Fetching value of define "__znver3__" : (undefined) 00:08:47.303 Fetching value of define "__znver4__" : (undefined) 00:08:47.303 Library asan found: YES 00:08:47.303 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:47.303 Message: lib/log: Defining dependency "log" 00:08:47.303 Message: lib/kvargs: Defining dependency "kvargs" 00:08:47.303 Message: lib/telemetry: Defining dependency "telemetry" 00:08:47.303 Library rt found: YES 00:08:47.303 Checking for function "getentropy" : NO 00:08:47.303 Message: lib/eal: Defining dependency "eal" 00:08:47.303 Message: lib/ring: Defining dependency "ring" 00:08:47.303 Message: lib/rcu: Defining dependency "rcu" 00:08:47.303 Message: lib/mempool: Defining dependency "mempool" 00:08:47.303 Message: lib/mbuf: Defining dependency "mbuf" 
00:08:47.303 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:47.303 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:47.303 Compiler for C supports arguments -mpclmul: YES 00:08:47.303 Compiler for C supports arguments -maes: YES 00:08:47.303 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:47.303 Compiler for C supports arguments -mavx512bw: YES 00:08:47.303 Compiler for C supports arguments -mavx512dq: YES 00:08:47.303 Compiler for C supports arguments -mavx512vl: YES 00:08:47.303 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:47.303 Compiler for C supports arguments -mavx2: YES 00:08:47.303 Compiler for C supports arguments -mavx: YES 00:08:47.303 Message: lib/net: Defining dependency "net" 00:08:47.303 Message: lib/meter: Defining dependency "meter" 00:08:47.303 Message: lib/ethdev: Defining dependency "ethdev" 00:08:47.303 Message: lib/pci: Defining dependency "pci" 00:08:47.303 Message: lib/cmdline: Defining dependency "cmdline" 00:08:47.303 Message: lib/hash: Defining dependency "hash" 00:08:47.303 Message: lib/timer: Defining dependency "timer" 00:08:47.303 Message: lib/compressdev: Defining dependency "compressdev" 00:08:47.303 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:47.303 Message: lib/dmadev: Defining dependency "dmadev" 00:08:47.303 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:47.303 Message: lib/power: Defining dependency "power" 00:08:47.303 Message: lib/reorder: Defining dependency "reorder" 00:08:47.303 Message: lib/security: Defining dependency "security" 00:08:47.303 Has header "linux/userfaultfd.h" : YES 00:08:47.303 Has header "linux/vduse.h" : YES 00:08:47.303 Message: lib/vhost: Defining dependency "vhost" 00:08:47.303 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:47.303 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:47.303 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:47.303 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:47.303 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:47.303 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:47.303 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:47.303 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:47.303 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:47.303 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:47.303 Program doxygen found: YES (/usr/bin/doxygen) 00:08:47.303 Configuring doxy-api-html.conf using configuration 00:08:47.304 Configuring doxy-api-man.conf using configuration 00:08:47.304 Program mandb found: YES (/usr/bin/mandb) 00:08:47.304 Program sphinx-build found: NO 00:08:47.304 Configuring rte_build_config.h using configuration 00:08:47.304 Message: 00:08:47.304 ================= 00:08:47.304 Applications Enabled 00:08:47.304 ================= 00:08:47.304 00:08:47.304 apps: 00:08:47.304 00:08:47.304 00:08:47.304 Message: 00:08:47.304 ================= 00:08:47.304 Libraries Enabled 00:08:47.304 ================= 00:08:47.304 00:08:47.304 libs: 00:08:47.304 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:47.304 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:47.304 cryptodev, dmadev, power, reorder, security, vhost, 00:08:47.304 00:08:47.304 Message: 00:08:47.304 =============== 00:08:47.304 Drivers Enabled 00:08:47.304 
=============== 00:08:47.304 00:08:47.304 common: 00:08:47.304 00:08:47.304 bus: 00:08:47.304 pci, vdev, 00:08:47.304 mempool: 00:08:47.304 ring, 00:08:47.304 dma: 00:08:47.304 00:08:47.304 net: 00:08:47.304 00:08:47.304 crypto: 00:08:47.304 00:08:47.304 compress: 00:08:47.304 00:08:47.304 vdpa: 00:08:47.304 00:08:47.304 00:08:47.304 Message: 00:08:47.304 ================= 00:08:47.304 Content Skipped 00:08:47.304 ================= 00:08:47.304 00:08:47.304 apps: 00:08:47.304 dumpcap: explicitly disabled via build config 00:08:47.304 graph: explicitly disabled via build config 00:08:47.304 pdump: explicitly disabled via build config 00:08:47.304 proc-info: explicitly disabled via build config 00:08:47.304 test-acl: explicitly disabled via build config 00:08:47.304 test-bbdev: explicitly disabled via build config 00:08:47.304 test-cmdline: explicitly disabled via build config 00:08:47.304 test-compress-perf: explicitly disabled via build config 00:08:47.304 test-crypto-perf: explicitly disabled via build config 00:08:47.304 test-dma-perf: explicitly disabled via build config 00:08:47.304 test-eventdev: explicitly disabled via build config 00:08:47.304 test-fib: explicitly disabled via build config 00:08:47.304 test-flow-perf: explicitly disabled via build config 00:08:47.304 test-gpudev: explicitly disabled via build config 00:08:47.304 test-mldev: explicitly disabled via build config 00:08:47.304 test-pipeline: explicitly disabled via build config 00:08:47.304 test-pmd: explicitly disabled via build config 00:08:47.304 test-regex: explicitly disabled via build config 00:08:47.304 test-sad: explicitly disabled via build config 00:08:47.304 test-security-perf: explicitly disabled via build config 00:08:47.304 00:08:47.304 libs: 00:08:47.304 argparse: explicitly disabled via build config 00:08:47.304 metrics: explicitly disabled via build config 00:08:47.304 acl: explicitly disabled via build config 00:08:47.304 bbdev: explicitly disabled via build config 00:08:47.304 bitratestats: explicitly disabled via build config 00:08:47.304 bpf: explicitly disabled via build config 00:08:47.304 cfgfile: explicitly disabled via build config 00:08:47.304 distributor: explicitly disabled via build config 00:08:47.304 efd: explicitly disabled via build config 00:08:47.304 eventdev: explicitly disabled via build config 00:08:47.304 dispatcher: explicitly disabled via build config 00:08:47.304 gpudev: explicitly disabled via build config 00:08:47.304 gro: explicitly disabled via build config 00:08:47.304 gso: explicitly disabled via build config 00:08:47.304 ip_frag: explicitly disabled via build config 00:08:47.304 jobstats: explicitly disabled via build config 00:08:47.305 latencystats: explicitly disabled via build config 00:08:47.305 lpm: explicitly disabled via build config 00:08:47.305 member: explicitly disabled via build config 00:08:47.305 pcapng: explicitly disabled via build config 00:08:47.305 rawdev: explicitly disabled via build config 00:08:47.305 regexdev: explicitly disabled via build config 00:08:47.305 mldev: explicitly disabled via build config 00:08:47.305 rib: explicitly disabled via build config 00:08:47.305 sched: explicitly disabled via build config 00:08:47.305 stack: explicitly disabled via build config 00:08:47.305 ipsec: explicitly disabled via build config 00:08:47.305 pdcp: explicitly disabled via build config 00:08:47.305 fib: explicitly disabled via build config 00:08:47.305 port: explicitly disabled via build config 00:08:47.305 pdump: explicitly disabled via build config 
00:08:47.305 table: explicitly disabled via build config 00:08:47.305 pipeline: explicitly disabled via build config 00:08:47.305 graph: explicitly disabled via build config 00:08:47.305 node: explicitly disabled via build config 00:08:47.305 00:08:47.305 drivers: 00:08:47.305 common/cpt: not in enabled drivers build config 00:08:47.305 common/dpaax: not in enabled drivers build config 00:08:47.305 common/iavf: not in enabled drivers build config 00:08:47.305 common/idpf: not in enabled drivers build config 00:08:47.305 common/ionic: not in enabled drivers build config 00:08:47.305 common/mvep: not in enabled drivers build config 00:08:47.305 common/octeontx: not in enabled drivers build config 00:08:47.305 bus/auxiliary: not in enabled drivers build config 00:08:47.305 bus/cdx: not in enabled drivers build config 00:08:47.305 bus/dpaa: not in enabled drivers build config 00:08:47.305 bus/fslmc: not in enabled drivers build config 00:08:47.305 bus/ifpga: not in enabled drivers build config 00:08:47.305 bus/platform: not in enabled drivers build config 00:08:47.305 bus/uacce: not in enabled drivers build config 00:08:47.305 bus/vmbus: not in enabled drivers build config 00:08:47.305 common/cnxk: not in enabled drivers build config 00:08:47.305 common/mlx5: not in enabled drivers build config 00:08:47.305 common/nfp: not in enabled drivers build config 00:08:47.305 common/nitrox: not in enabled drivers build config 00:08:47.305 common/qat: not in enabled drivers build config 00:08:47.305 common/sfc_efx: not in enabled drivers build config 00:08:47.305 mempool/bucket: not in enabled drivers build config 00:08:47.305 mempool/cnxk: not in enabled drivers build config 00:08:47.305 mempool/dpaa: not in enabled drivers build config 00:08:47.305 mempool/dpaa2: not in enabled drivers build config 00:08:47.305 mempool/octeontx: not in enabled drivers build config 00:08:47.305 mempool/stack: not in enabled drivers build config 00:08:47.305 dma/cnxk: not in enabled drivers build config 00:08:47.305 dma/dpaa: not in enabled drivers build config 00:08:47.305 dma/dpaa2: not in enabled drivers build config 00:08:47.305 dma/hisilicon: not in enabled drivers build config 00:08:47.305 dma/idxd: not in enabled drivers build config 00:08:47.305 dma/ioat: not in enabled drivers build config 00:08:47.305 dma/skeleton: not in enabled drivers build config 00:08:47.305 net/af_packet: not in enabled drivers build config 00:08:47.305 net/af_xdp: not in enabled drivers build config 00:08:47.305 net/ark: not in enabled drivers build config 00:08:47.305 net/atlantic: not in enabled drivers build config 00:08:47.305 net/avp: not in enabled drivers build config 00:08:47.305 net/axgbe: not in enabled drivers build config 00:08:47.305 net/bnx2x: not in enabled drivers build config 00:08:47.305 net/bnxt: not in enabled drivers build config 00:08:47.305 net/bonding: not in enabled drivers build config 00:08:47.305 net/cnxk: not in enabled drivers build config 00:08:47.305 net/cpfl: not in enabled drivers build config 00:08:47.305 net/cxgbe: not in enabled drivers build config 00:08:47.305 net/dpaa: not in enabled drivers build config 00:08:47.305 net/dpaa2: not in enabled drivers build config 00:08:47.305 net/e1000: not in enabled drivers build config 00:08:47.305 net/ena: not in enabled drivers build config 00:08:47.305 net/enetc: not in enabled drivers build config 00:08:47.305 net/enetfec: not in enabled drivers build config 00:08:47.305 net/enic: not in enabled drivers build config 00:08:47.305 net/failsafe: not in enabled 
drivers build config 00:08:47.305 net/fm10k: not in enabled drivers build config 00:08:47.305 net/gve: not in enabled drivers build config 00:08:47.305 net/hinic: not in enabled drivers build config 00:08:47.305 net/hns3: not in enabled drivers build config 00:08:47.305 net/i40e: not in enabled drivers build config 00:08:47.305 net/iavf: not in enabled drivers build config 00:08:47.305 net/ice: not in enabled drivers build config 00:08:47.305 net/idpf: not in enabled drivers build config 00:08:47.305 net/igc: not in enabled drivers build config 00:08:47.305 net/ionic: not in enabled drivers build config 00:08:47.305 net/ipn3ke: not in enabled drivers build config 00:08:47.305 net/ixgbe: not in enabled drivers build config 00:08:47.306 net/mana: not in enabled drivers build config 00:08:47.306 net/memif: not in enabled drivers build config 00:08:47.306 net/mlx4: not in enabled drivers build config 00:08:47.306 net/mlx5: not in enabled drivers build config 00:08:47.306 net/mvneta: not in enabled drivers build config 00:08:47.306 net/mvpp2: not in enabled drivers build config 00:08:47.306 net/netvsc: not in enabled drivers build config 00:08:47.306 net/nfb: not in enabled drivers build config 00:08:47.306 net/nfp: not in enabled drivers build config 00:08:47.306 net/ngbe: not in enabled drivers build config 00:08:47.306 net/null: not in enabled drivers build config 00:08:47.306 net/octeontx: not in enabled drivers build config 00:08:47.306 net/octeon_ep: not in enabled drivers build config 00:08:47.306 net/pcap: not in enabled drivers build config 00:08:47.306 net/pfe: not in enabled drivers build config 00:08:47.306 net/qede: not in enabled drivers build config 00:08:47.306 net/ring: not in enabled drivers build config 00:08:47.306 net/sfc: not in enabled drivers build config 00:08:47.306 net/softnic: not in enabled drivers build config 00:08:47.306 net/tap: not in enabled drivers build config 00:08:47.306 net/thunderx: not in enabled drivers build config 00:08:47.306 net/txgbe: not in enabled drivers build config 00:08:47.306 net/vdev_netvsc: not in enabled drivers build config 00:08:47.306 net/vhost: not in enabled drivers build config 00:08:47.306 net/virtio: not in enabled drivers build config 00:08:47.306 net/vmxnet3: not in enabled drivers build config 00:08:47.306 raw/*: missing internal dependency, "rawdev" 00:08:47.306 crypto/armv8: not in enabled drivers build config 00:08:47.306 crypto/bcmfs: not in enabled drivers build config 00:08:47.306 crypto/caam_jr: not in enabled drivers build config 00:08:47.306 crypto/ccp: not in enabled drivers build config 00:08:47.306 crypto/cnxk: not in enabled drivers build config 00:08:47.306 crypto/dpaa_sec: not in enabled drivers build config 00:08:47.306 crypto/dpaa2_sec: not in enabled drivers build config 00:08:47.306 crypto/ipsec_mb: not in enabled drivers build config 00:08:47.306 crypto/mlx5: not in enabled drivers build config 00:08:47.306 crypto/mvsam: not in enabled drivers build config 00:08:47.306 crypto/nitrox: not in enabled drivers build config 00:08:47.306 crypto/null: not in enabled drivers build config 00:08:47.306 crypto/octeontx: not in enabled drivers build config 00:08:47.306 crypto/openssl: not in enabled drivers build config 00:08:47.306 crypto/scheduler: not in enabled drivers build config 00:08:47.306 crypto/uadk: not in enabled drivers build config 00:08:47.306 crypto/virtio: not in enabled drivers build config 00:08:47.306 compress/isal: not in enabled drivers build config 00:08:47.306 compress/mlx5: not in enabled 
drivers build config 00:08:47.306 compress/nitrox: not in enabled drivers build config 00:08:47.306 compress/octeontx: not in enabled drivers build config 00:08:47.306 compress/zlib: not in enabled drivers build config 00:08:47.306 regex/*: missing internal dependency, "regexdev" 00:08:47.306 ml/*: missing internal dependency, "mldev" 00:08:47.306 vdpa/ifc: not in enabled drivers build config 00:08:47.306 vdpa/mlx5: not in enabled drivers build config 00:08:47.306 vdpa/nfp: not in enabled drivers build config 00:08:47.306 vdpa/sfc: not in enabled drivers build config 00:08:47.306 event/*: missing internal dependency, "eventdev" 00:08:47.306 baseband/*: missing internal dependency, "bbdev" 00:08:47.306 gpu/*: missing internal dependency, "gpudev" 00:08:47.306 00:08:47.306 00:08:47.306 Build targets in project: 85 00:08:47.306 00:08:47.306 DPDK 24.03.0 00:08:47.306 00:08:47.306 User defined options 00:08:47.306 buildtype : debug 00:08:47.306 default_library : shared 00:08:47.306 libdir : lib 00:08:47.307 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:47.307 b_sanitize : address 00:08:47.307 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:47.307 c_link_args : 00:08:47.307 cpu_instruction_set: native 00:08:47.307 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:08:47.307 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:08:47.307 enable_docs : false 00:08:47.307 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:47.307 enable_kmods : false 00:08:47.307 max_lcores : 128 00:08:47.307 tests : false 00:08:47.307 00:08:47.307 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:47.307 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:08:47.307 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:47.307 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:47.307 [3/268] Linking static target lib/librte_kvargs.a 00:08:47.307 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:47.307 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:47.307 [6/268] Linking static target lib/librte_log.a 00:08:47.307 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.307 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:47.565 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:47.565 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:47.823 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:47.823 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:47.823 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:47.823 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:47.823 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:47.823 [16/268] Generating lib/log.sym_chk with a custom command 
(wrapped by meson to capture output) 00:08:47.823 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:47.823 [18/268] Linking target lib/librte_log.so.24.1 00:08:47.823 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:47.823 [20/268] Linking static target lib/librte_telemetry.a 00:08:48.080 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:48.339 [22/268] Linking target lib/librte_kvargs.so.24.1 00:08:48.339 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:48.339 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:48.339 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:48.597 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:48.597 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:48.597 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:48.597 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:48.597 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:48.856 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:48.856 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:48.856 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.856 [34/268] Linking target lib/librte_telemetry.so.24.1 00:08:49.114 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:49.114 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:49.373 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:49.373 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:49.373 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:49.373 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:49.373 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:49.373 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:49.373 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:49.631 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:49.631 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:49.631 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:49.890 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:49.890 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:50.149 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:50.149 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:50.407 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:50.407 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:50.407 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:50.666 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:50.666 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:50.666 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:50.666 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:50.924 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:50.924 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:50.924 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:51.183 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:51.183 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:51.183 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:51.441 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:51.441 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:51.441 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:51.441 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:51.699 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:51.699 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:51.957 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:52.215 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:52.215 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:52.215 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:52.215 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:52.215 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:52.215 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:52.473 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:52.473 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:52.473 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:52.473 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:52.732 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:52.991 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:52.991 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:53.250 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:53.250 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:53.250 [86/268] Linking static target lib/librte_eal.a 00:08:53.250 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:53.250 [88/268] Linking static target lib/librte_ring.a 00:08:53.507 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:53.507 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:53.507 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:53.507 [92/268] Linking static target lib/librte_rcu.a 00:08:53.507 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:53.766 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:53.766 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:53.766 [96/268] Linking static target lib/librte_mempool.a 00:08:53.766 [97/268] Generating 
lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:54.025 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:54.025 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:54.025 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:54.025 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:54.284 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:54.284 [103/268] Linking static target lib/librte_mbuf.a 00:08:54.284 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:54.284 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:54.284 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:54.589 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:54.847 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:54.847 [109/268] Linking static target lib/librte_meter.a 00:08:54.847 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:54.847 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:54.847 [112/268] Linking static target lib/librte_net.a 00:08:54.847 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:55.105 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:55.105 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.363 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.363 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:55.363 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.363 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.622 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:55.622 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:55.882 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:56.140 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:56.140 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:56.140 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:56.399 [126/268] Linking static target lib/librte_pci.a 00:08:56.399 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:56.399 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:56.399 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:56.399 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:56.399 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:56.658 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:56.658 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:56.658 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:56.658 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:56.658 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:56.658 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:56.917 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:56.917 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:56.917 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:56.917 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:56.917 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:56.917 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:56.917 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:57.176 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:57.176 [146/268] Linking static target lib/librte_cmdline.a 00:08:57.176 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:57.434 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:57.434 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:57.692 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:57.692 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:57.951 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:57.951 [153/268] Linking static target lib/librte_timer.a 00:08:57.951 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:57.951 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:57.951 [156/268] Linking static target lib/librte_ethdev.a 00:08:58.214 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:58.214 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:58.214 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:58.472 [160/268] Linking static target lib/librte_compressdev.a 00:08:58.473 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:58.473 [162/268] Linking static target lib/librte_hash.a 00:08:58.473 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:58.473 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:58.473 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:58.731 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:58.991 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:58.991 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:58.991 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:58.991 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:58.991 [171/268] Linking static target lib/librte_dmadev.a 00:08:58.991 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:59.250 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:59.250 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:59.508 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:59.508 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:08:59.508 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:59.768 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:59.768 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:59.768 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:59.768 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:59.768 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:00.027 [183/268] Linking static target lib/librte_cryptodev.a 00:09:00.027 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:00.286 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:00.286 [186/268] Linking static target lib/librte_power.a 00:09:00.286 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:00.286 [188/268] Linking static target lib/librte_reorder.a 00:09:00.546 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:00.546 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:00.546 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:00.546 [192/268] Linking static target lib/librte_security.a 00:09:00.546 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:00.805 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.064 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:01.064 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.064 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.323 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:01.323 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:01.583 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:01.583 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.583 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:01.842 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:01.842 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:01.842 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:02.101 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:02.101 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:02.360 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:02.360 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:02.360 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:02.360 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:02.360 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:02.360 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:02.360 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:02.619 [215/268] Linking static target drivers/librte_bus_vdev.a 00:09:02.619 [216/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:09:02.619 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:02.619 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:02.619 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:02.619 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:02.619 [221/268] Linking static target drivers/librte_bus_pci.a 00:09:02.619 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:02.619 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:02.878 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:02.878 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:02.878 [226/268] Linking static target drivers/librte_mempool_ring.a 00:09:03.136 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:03.703 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:03.703 [229/268] Linking target lib/librte_eal.so.24.1 00:09:03.961 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:09:03.961 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:03.961 [232/268] Linking target lib/librte_meter.so.24.1 00:09:03.961 [233/268] Linking target lib/librte_ring.so.24.1 00:09:03.961 [234/268] Linking target lib/librte_timer.so.24.1 00:09:03.961 [235/268] Linking target lib/librte_pci.so.24.1 00:09:03.961 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:09:03.961 [237/268] Linking target lib/librte_dmadev.so.24.1 00:09:03.961 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:09:03.961 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:09:03.961 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:09:03.961 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:09:04.220 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:09:04.220 [243/268] Linking target lib/librte_rcu.so.24.1 00:09:04.220 [244/268] Linking target lib/librte_mempool.so.24.1 00:09:04.220 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:09:04.220 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:09:04.220 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:09:04.478 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:09:04.478 [249/268] Linking target lib/librte_mbuf.so.24.1 00:09:04.478 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:09:04.478 [251/268] Linking target lib/librte_compressdev.so.24.1 00:09:04.478 [252/268] Linking target lib/librte_net.so.24.1 00:09:04.478 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:09:04.478 [254/268] Linking target lib/librte_reorder.so.24.1 00:09:04.736 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:09:04.736 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:09:04.736 [257/268] 
Linking target lib/librte_cmdline.so.24.1 00:09:04.736 [258/268] Linking target lib/librte_hash.so.24.1 00:09:04.736 [259/268] Linking target lib/librte_security.so.24.1 00:09:04.995 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:04.995 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:09:04.995 [262/268] Linking target lib/librte_ethdev.so.24.1 00:09:05.254 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:09:05.254 [264/268] Linking target lib/librte_power.so.24.1 00:09:07.792 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:07.792 [266/268] Linking static target lib/librte_vhost.a 00:09:09.167 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:09.425 [268/268] Linking target lib/librte_vhost.so.24.1 00:09:09.425 INFO: autodetecting backend as ninja 00:09:09.425 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:10.362 CC lib/ut_mock/mock.o 00:09:10.362 CC lib/ut/ut.o 00:09:10.362 CC lib/log/log.o 00:09:10.362 CC lib/log/log_flags.o 00:09:10.362 CC lib/log/log_deprecated.o 00:09:10.622 LIB libspdk_ut_mock.a 00:09:10.880 SO libspdk_ut_mock.so.6.0 00:09:10.880 LIB libspdk_ut.a 00:09:10.880 LIB libspdk_log.a 00:09:10.880 SO libspdk_ut.so.2.0 00:09:10.880 SO libspdk_log.so.7.0 00:09:10.880 SYMLINK libspdk_ut_mock.so 00:09:10.880 SYMLINK libspdk_ut.so 00:09:10.880 SYMLINK libspdk_log.so 00:09:11.138 CXX lib/trace_parser/trace.o 00:09:11.138 CC lib/util/base64.o 00:09:11.138 CC lib/util/bit_array.o 00:09:11.138 CC lib/util/cpuset.o 00:09:11.138 CC lib/util/crc16.o 00:09:11.138 CC lib/dma/dma.o 00:09:11.138 CC lib/util/crc32.o 00:09:11.138 CC lib/util/crc32c.o 00:09:11.138 CC lib/ioat/ioat.o 00:09:11.138 CC lib/vfio_user/host/vfio_user_pci.o 00:09:11.138 CC lib/vfio_user/host/vfio_user.o 00:09:11.138 CC lib/util/crc32_ieee.o 00:09:11.397 CC lib/util/crc64.o 00:09:11.397 CC lib/util/dif.o 00:09:11.397 LIB libspdk_dma.a 00:09:11.397 SO libspdk_dma.so.4.0 00:09:11.397 CC lib/util/fd.o 00:09:11.397 CC lib/util/fd_group.o 00:09:11.397 CC lib/util/file.o 00:09:11.397 CC lib/util/hexlify.o 00:09:11.397 SYMLINK libspdk_dma.so 00:09:11.397 CC lib/util/iov.o 00:09:11.397 CC lib/util/math.o 00:09:11.397 LIB libspdk_ioat.a 00:09:11.656 SO libspdk_ioat.so.7.0 00:09:11.656 CC lib/util/net.o 00:09:11.656 CC lib/util/pipe.o 00:09:11.656 LIB libspdk_vfio_user.a 00:09:11.656 SYMLINK libspdk_ioat.so 00:09:11.656 CC lib/util/strerror_tls.o 00:09:11.656 SO libspdk_vfio_user.so.5.0 00:09:11.656 CC lib/util/string.o 00:09:11.656 CC lib/util/uuid.o 00:09:11.656 CC lib/util/xor.o 00:09:11.656 SYMLINK libspdk_vfio_user.so 00:09:11.656 CC lib/util/zipf.o 00:09:11.915 LIB libspdk_util.a 00:09:12.173 SO libspdk_util.so.10.0 00:09:12.173 SYMLINK libspdk_util.so 00:09:12.431 LIB libspdk_trace_parser.a 00:09:12.431 SO libspdk_trace_parser.so.5.0 00:09:12.431 CC lib/rdma_utils/rdma_utils.o 00:09:12.431 CC lib/json/json_parse.o 00:09:12.431 CC lib/env_dpdk/env.o 00:09:12.431 CC lib/vmd/vmd.o 00:09:12.431 CC lib/conf/conf.o 00:09:12.431 CC lib/json/json_util.o 00:09:12.431 CC lib/vmd/led.o 00:09:12.431 CC lib/rdma_provider/common.o 00:09:12.431 CC lib/idxd/idxd.o 00:09:12.431 SYMLINK libspdk_trace_parser.so 00:09:12.431 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:12.690 CC lib/env_dpdk/memory.o 00:09:12.690 CC lib/env_dpdk/pci.o 
00:09:12.690 LIB libspdk_rdma_provider.a 00:09:12.690 LIB libspdk_conf.a 00:09:12.690 CC lib/json/json_write.o 00:09:12.690 CC lib/env_dpdk/init.o 00:09:12.690 SO libspdk_rdma_provider.so.6.0 00:09:12.690 SO libspdk_conf.so.6.0 00:09:12.690 LIB libspdk_rdma_utils.a 00:09:12.690 SO libspdk_rdma_utils.so.1.0 00:09:12.947 SYMLINK libspdk_rdma_provider.so 00:09:12.947 SYMLINK libspdk_conf.so 00:09:12.947 CC lib/env_dpdk/threads.o 00:09:12.947 CC lib/env_dpdk/pci_ioat.o 00:09:12.947 SYMLINK libspdk_rdma_utils.so 00:09:12.947 CC lib/idxd/idxd_user.o 00:09:12.947 CC lib/env_dpdk/pci_virtio.o 00:09:12.947 CC lib/env_dpdk/pci_vmd.o 00:09:12.947 LIB libspdk_json.a 00:09:13.205 CC lib/env_dpdk/pci_idxd.o 00:09:13.205 SO libspdk_json.so.6.0 00:09:13.205 CC lib/env_dpdk/pci_event.o 00:09:13.205 CC lib/idxd/idxd_kernel.o 00:09:13.205 CC lib/env_dpdk/sigbus_handler.o 00:09:13.205 CC lib/env_dpdk/pci_dpdk.o 00:09:13.205 SYMLINK libspdk_json.so 00:09:13.205 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:13.206 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:13.206 LIB libspdk_vmd.a 00:09:13.206 LIB libspdk_idxd.a 00:09:13.206 SO libspdk_vmd.so.6.0 00:09:13.464 SO libspdk_idxd.so.12.0 00:09:13.464 SYMLINK libspdk_vmd.so 00:09:13.464 CC lib/jsonrpc/jsonrpc_server.o 00:09:13.464 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:13.464 CC lib/jsonrpc/jsonrpc_client.o 00:09:13.464 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:13.464 SYMLINK libspdk_idxd.so 00:09:13.721 LIB libspdk_jsonrpc.a 00:09:13.721 SO libspdk_jsonrpc.so.6.0 00:09:13.721 SYMLINK libspdk_jsonrpc.so 00:09:14.287 CC lib/rpc/rpc.o 00:09:14.287 LIB libspdk_env_dpdk.a 00:09:14.287 SO libspdk_env_dpdk.so.15.0 00:09:14.287 LIB libspdk_rpc.a 00:09:14.287 SO libspdk_rpc.so.6.0 00:09:14.545 SYMLINK libspdk_rpc.so 00:09:14.545 SYMLINK libspdk_env_dpdk.so 00:09:14.545 CC lib/notify/notify.o 00:09:14.545 CC lib/keyring/keyring.o 00:09:14.545 CC lib/notify/notify_rpc.o 00:09:14.545 CC lib/keyring/keyring_rpc.o 00:09:14.804 CC lib/trace/trace.o 00:09:14.804 CC lib/trace/trace_flags.o 00:09:14.804 CC lib/trace/trace_rpc.o 00:09:14.804 LIB libspdk_notify.a 00:09:14.804 SO libspdk_notify.so.6.0 00:09:15.061 SYMLINK libspdk_notify.so 00:09:15.061 LIB libspdk_keyring.a 00:09:15.062 LIB libspdk_trace.a 00:09:15.062 SO libspdk_keyring.so.1.0 00:09:15.062 SO libspdk_trace.so.10.0 00:09:15.062 SYMLINK libspdk_keyring.so 00:09:15.062 SYMLINK libspdk_trace.so 00:09:15.319 CC lib/sock/sock.o 00:09:15.319 CC lib/sock/sock_rpc.o 00:09:15.319 CC lib/thread/thread.o 00:09:15.319 CC lib/thread/iobuf.o 00:09:15.884 LIB libspdk_sock.a 00:09:16.141 SO libspdk_sock.so.10.0 00:09:16.141 SYMLINK libspdk_sock.so 00:09:16.399 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:16.399 CC lib/nvme/nvme_ctrlr.o 00:09:16.399 CC lib/nvme/nvme_fabric.o 00:09:16.399 CC lib/nvme/nvme_ns_cmd.o 00:09:16.399 CC lib/nvme/nvme_ns.o 00:09:16.399 CC lib/nvme/nvme_pcie.o 00:09:16.399 CC lib/nvme/nvme_pcie_common.o 00:09:16.399 CC lib/nvme/nvme_qpair.o 00:09:16.399 CC lib/nvme/nvme.o 00:09:17.331 CC lib/nvme/nvme_quirks.o 00:09:17.331 CC lib/nvme/nvme_transport.o 00:09:17.331 LIB libspdk_thread.a 00:09:17.331 CC lib/nvme/nvme_discovery.o 00:09:17.331 SO libspdk_thread.so.10.1 00:09:17.331 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:17.588 SYMLINK libspdk_thread.so 00:09:17.588 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:17.588 CC lib/nvme/nvme_tcp.o 00:09:17.588 CC lib/nvme/nvme_opal.o 00:09:17.588 CC lib/accel/accel.o 00:09:17.846 CC lib/nvme/nvme_io_msg.o 00:09:18.104 CC lib/blob/blobstore.o 00:09:18.104 CC lib/blob/request.o 00:09:18.104 CC 
lib/init/json_config.o 00:09:18.104 CC lib/virtio/virtio.o 00:09:18.363 CC lib/virtio/virtio_vhost_user.o 00:09:18.363 CC lib/virtio/virtio_vfio_user.o 00:09:18.363 CC lib/virtio/virtio_pci.o 00:09:18.363 CC lib/init/subsystem.o 00:09:18.621 CC lib/blob/zeroes.o 00:09:18.621 CC lib/accel/accel_rpc.o 00:09:18.621 CC lib/accel/accel_sw.o 00:09:18.621 CC lib/nvme/nvme_poll_group.o 00:09:18.621 CC lib/init/subsystem_rpc.o 00:09:18.621 CC lib/blob/blob_bs_dev.o 00:09:18.878 CC lib/nvme/nvme_zns.o 00:09:18.878 LIB libspdk_virtio.a 00:09:18.878 CC lib/init/rpc.o 00:09:18.878 SO libspdk_virtio.so.7.0 00:09:18.878 SYMLINK libspdk_virtio.so 00:09:18.878 CC lib/nvme/nvme_stubs.o 00:09:18.878 LIB libspdk_accel.a 00:09:18.878 CC lib/nvme/nvme_auth.o 00:09:18.878 CC lib/nvme/nvme_cuse.o 00:09:18.878 SO libspdk_accel.so.16.0 00:09:18.878 LIB libspdk_init.a 00:09:19.139 SO libspdk_init.so.5.0 00:09:19.139 SYMLINK libspdk_accel.so 00:09:19.139 SYMLINK libspdk_init.so 00:09:19.139 CC lib/nvme/nvme_rdma.o 00:09:19.403 CC lib/bdev/bdev.o 00:09:19.403 CC lib/bdev/bdev_rpc.o 00:09:19.403 CC lib/bdev/bdev_zone.o 00:09:19.403 CC lib/event/app.o 00:09:19.403 CC lib/event/reactor.o 00:09:19.403 CC lib/event/log_rpc.o 00:09:19.661 CC lib/bdev/part.o 00:09:19.661 CC lib/event/app_rpc.o 00:09:19.661 CC lib/bdev/scsi_nvme.o 00:09:19.920 CC lib/event/scheduler_static.o 00:09:20.178 LIB libspdk_event.a 00:09:20.178 SO libspdk_event.so.14.0 00:09:20.178 SYMLINK libspdk_event.so 00:09:21.114 LIB libspdk_nvme.a 00:09:21.114 SO libspdk_nvme.so.13.1 00:09:21.373 SYMLINK libspdk_nvme.so 00:09:22.307 LIB libspdk_blob.a 00:09:22.307 SO libspdk_blob.so.11.0 00:09:22.307 SYMLINK libspdk_blob.so 00:09:22.565 CC lib/blobfs/blobfs.o 00:09:22.565 CC lib/lvol/lvol.o 00:09:22.565 CC lib/blobfs/tree.o 00:09:22.565 LIB libspdk_bdev.a 00:09:22.835 SO libspdk_bdev.so.16.0 00:09:22.835 SYMLINK libspdk_bdev.so 00:09:23.130 CC lib/nvmf/ctrlr.o 00:09:23.130 CC lib/nvmf/ctrlr_discovery.o 00:09:23.130 CC lib/nvmf/ctrlr_bdev.o 00:09:23.130 CC lib/ublk/ublk.o 00:09:23.130 CC lib/nvmf/subsystem.o 00:09:23.130 CC lib/scsi/dev.o 00:09:23.130 CC lib/ftl/ftl_core.o 00:09:23.130 CC lib/nbd/nbd.o 00:09:23.388 CC lib/scsi/lun.o 00:09:23.646 CC lib/ftl/ftl_init.o 00:09:23.646 CC lib/nbd/nbd_rpc.o 00:09:23.646 CC lib/ftl/ftl_layout.o 00:09:23.905 CC lib/scsi/port.o 00:09:23.905 LIB libspdk_blobfs.a 00:09:23.905 CC lib/scsi/scsi.o 00:09:23.905 LIB libspdk_lvol.a 00:09:23.905 SO libspdk_blobfs.so.10.0 00:09:23.905 LIB libspdk_nbd.a 00:09:23.905 SO libspdk_nbd.so.7.0 00:09:23.905 SO libspdk_lvol.so.10.0 00:09:23.905 SYMLINK libspdk_blobfs.so 00:09:23.905 CC lib/ublk/ublk_rpc.o 00:09:23.905 SYMLINK libspdk_nbd.so 00:09:23.905 CC lib/scsi/scsi_bdev.o 00:09:23.905 CC lib/ftl/ftl_debug.o 00:09:23.905 SYMLINK libspdk_lvol.so 00:09:23.905 CC lib/ftl/ftl_io.o 00:09:23.905 CC lib/ftl/ftl_sb.o 00:09:24.163 CC lib/scsi/scsi_pr.o 00:09:24.163 CC lib/scsi/scsi_rpc.o 00:09:24.163 CC lib/scsi/task.o 00:09:24.163 LIB libspdk_ublk.a 00:09:24.163 CC lib/nvmf/nvmf.o 00:09:24.163 SO libspdk_ublk.so.3.0 00:09:24.163 CC lib/ftl/ftl_l2p.o 00:09:24.163 CC lib/nvmf/nvmf_rpc.o 00:09:24.422 SYMLINK libspdk_ublk.so 00:09:24.422 CC lib/nvmf/transport.o 00:09:24.422 CC lib/nvmf/tcp.o 00:09:24.422 CC lib/ftl/ftl_l2p_flat.o 00:09:24.422 CC lib/ftl/ftl_nv_cache.o 00:09:24.422 CC lib/ftl/ftl_band.o 00:09:24.680 CC lib/ftl/ftl_band_ops.o 00:09:24.680 LIB libspdk_scsi.a 00:09:24.680 SO libspdk_scsi.so.9.0 00:09:24.680 CC lib/nvmf/stubs.o 00:09:24.680 SYMLINK libspdk_scsi.so 00:09:24.680 CC 
lib/nvmf/mdns_server.o 00:09:24.938 CC lib/nvmf/rdma.o 00:09:24.938 CC lib/nvmf/auth.o 00:09:25.196 CC lib/iscsi/conn.o 00:09:25.196 CC lib/iscsi/init_grp.o 00:09:25.196 CC lib/ftl/ftl_writer.o 00:09:25.454 CC lib/iscsi/iscsi.o 00:09:25.454 CC lib/vhost/vhost.o 00:09:25.454 CC lib/vhost/vhost_rpc.o 00:09:25.454 CC lib/ftl/ftl_rq.o 00:09:25.712 CC lib/iscsi/md5.o 00:09:25.712 CC lib/ftl/ftl_reloc.o 00:09:25.712 CC lib/vhost/vhost_scsi.o 00:09:25.712 CC lib/iscsi/param.o 00:09:25.970 CC lib/iscsi/portal_grp.o 00:09:25.970 CC lib/iscsi/tgt_node.o 00:09:26.228 CC lib/vhost/vhost_blk.o 00:09:26.228 CC lib/ftl/ftl_l2p_cache.o 00:09:26.228 CC lib/vhost/rte_vhost_user.o 00:09:26.228 CC lib/iscsi/iscsi_subsystem.o 00:09:26.228 CC lib/iscsi/iscsi_rpc.o 00:09:26.486 CC lib/iscsi/task.o 00:09:26.486 CC lib/ftl/ftl_p2l.o 00:09:26.757 CC lib/ftl/mngt/ftl_mngt.o 00:09:26.757 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:26.757 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:26.757 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:27.016 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:27.016 LIB libspdk_iscsi.a 00:09:27.274 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:27.274 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:27.274 SO libspdk_iscsi.so.8.0 00:09:27.274 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:27.274 CC lib/ftl/utils/ftl_conf.o 00:09:27.274 CC lib/ftl/utils/ftl_md.o 00:09:27.274 CC lib/ftl/utils/ftl_mempool.o 00:09:27.274 CC lib/ftl/utils/ftl_bitmap.o 00:09:27.532 LIB libspdk_vhost.a 00:09:27.532 CC lib/ftl/utils/ftl_property.o 00:09:27.532 SYMLINK libspdk_iscsi.so 00:09:27.532 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:27.532 SO libspdk_vhost.so.8.0 00:09:27.532 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:27.532 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:27.532 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:27.532 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:27.532 SYMLINK libspdk_vhost.so 00:09:27.532 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:27.790 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:27.790 LIB libspdk_nvmf.a 00:09:27.790 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:27.790 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:27.790 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:27.790 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:27.790 CC lib/ftl/base/ftl_base_dev.o 00:09:27.790 CC lib/ftl/base/ftl_base_bdev.o 00:09:27.790 SO libspdk_nvmf.so.19.0 00:09:27.790 CC lib/ftl/ftl_trace.o 00:09:28.048 SYMLINK libspdk_nvmf.so 00:09:28.306 LIB libspdk_ftl.a 00:09:28.306 SO libspdk_ftl.so.9.0 00:09:28.873 SYMLINK libspdk_ftl.so 00:09:29.133 CC module/env_dpdk/env_dpdk_rpc.o 00:09:29.133 CC module/keyring/linux/keyring.o 00:09:29.133 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:29.133 CC module/keyring/file/keyring.o 00:09:29.133 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:29.133 CC module/scheduler/gscheduler/gscheduler.o 00:09:29.133 CC module/accel/ioat/accel_ioat.o 00:09:29.133 CC module/blob/bdev/blob_bdev.o 00:09:29.133 CC module/accel/error/accel_error.o 00:09:29.133 CC module/sock/posix/posix.o 00:09:29.392 LIB libspdk_env_dpdk_rpc.a 00:09:29.392 SO libspdk_env_dpdk_rpc.so.6.0 00:09:29.392 CC module/keyring/file/keyring_rpc.o 00:09:29.392 SYMLINK libspdk_env_dpdk_rpc.so 00:09:29.392 CC module/accel/error/accel_error_rpc.o 00:09:29.392 LIB libspdk_scheduler_gscheduler.a 00:09:29.392 CC module/keyring/linux/keyring_rpc.o 
00:09:29.392 LIB libspdk_scheduler_dpdk_governor.a 00:09:29.392 SO libspdk_scheduler_gscheduler.so.4.0 00:09:29.392 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:29.392 LIB libspdk_scheduler_dynamic.a 00:09:29.392 CC module/accel/ioat/accel_ioat_rpc.o 00:09:29.392 SO libspdk_scheduler_dynamic.so.4.0 00:09:29.392 SYMLINK libspdk_scheduler_gscheduler.so 00:09:29.392 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:29.392 SYMLINK libspdk_scheduler_dynamic.so 00:09:29.392 LIB libspdk_keyring_linux.a 00:09:29.392 LIB libspdk_keyring_file.a 00:09:29.392 LIB libspdk_accel_error.a 00:09:29.651 LIB libspdk_blob_bdev.a 00:09:29.651 SO libspdk_accel_error.so.2.0 00:09:29.651 SO libspdk_keyring_file.so.1.0 00:09:29.651 SO libspdk_keyring_linux.so.1.0 00:09:29.651 LIB libspdk_accel_ioat.a 00:09:29.651 SO libspdk_blob_bdev.so.11.0 00:09:29.651 SO libspdk_accel_ioat.so.6.0 00:09:29.651 SYMLINK libspdk_keyring_file.so 00:09:29.651 SYMLINK libspdk_accel_error.so 00:09:29.651 SYMLINK libspdk_blob_bdev.so 00:09:29.651 CC module/accel/dsa/accel_dsa.o 00:09:29.651 CC module/accel/dsa/accel_dsa_rpc.o 00:09:29.651 CC module/accel/iaa/accel_iaa.o 00:09:29.651 CC module/accel/iaa/accel_iaa_rpc.o 00:09:29.651 SYMLINK libspdk_keyring_linux.so 00:09:29.651 SYMLINK libspdk_accel_ioat.so 00:09:29.909 LIB libspdk_accel_iaa.a 00:09:29.909 CC module/bdev/gpt/gpt.o 00:09:29.909 CC module/bdev/error/vbdev_error.o 00:09:29.909 CC module/bdev/delay/vbdev_delay.o 00:09:29.909 CC module/bdev/lvol/vbdev_lvol.o 00:09:29.909 CC module/blobfs/bdev/blobfs_bdev.o 00:09:29.909 SO libspdk_accel_iaa.so.3.0 00:09:29.909 LIB libspdk_accel_dsa.a 00:09:29.909 CC module/bdev/malloc/bdev_malloc.o 00:09:29.909 CC module/bdev/null/bdev_null.o 00:09:29.909 SYMLINK libspdk_accel_iaa.so 00:09:29.909 CC module/bdev/null/bdev_null_rpc.o 00:09:29.909 SO libspdk_accel_dsa.so.5.0 00:09:30.167 SYMLINK libspdk_accel_dsa.so 00:09:30.167 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:30.167 CC module/bdev/gpt/vbdev_gpt.o 00:09:30.167 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:30.167 LIB libspdk_sock_posix.a 00:09:30.167 SO libspdk_sock_posix.so.6.0 00:09:30.167 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:30.167 SYMLINK libspdk_sock_posix.so 00:09:30.167 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:30.167 CC module/bdev/error/vbdev_error_rpc.o 00:09:30.426 LIB libspdk_bdev_null.a 00:09:30.426 LIB libspdk_blobfs_bdev.a 00:09:30.426 SO libspdk_bdev_null.so.6.0 00:09:30.426 SO libspdk_blobfs_bdev.so.6.0 00:09:30.426 LIB libspdk_bdev_delay.a 00:09:30.426 SYMLINK libspdk_bdev_null.so 00:09:30.426 LIB libspdk_bdev_gpt.a 00:09:30.426 SYMLINK libspdk_blobfs_bdev.so 00:09:30.426 SO libspdk_bdev_delay.so.6.0 00:09:30.426 LIB libspdk_bdev_malloc.a 00:09:30.426 CC module/bdev/nvme/bdev_nvme.o 00:09:30.426 SO libspdk_bdev_gpt.so.6.0 00:09:30.426 LIB libspdk_bdev_error.a 00:09:30.426 SO libspdk_bdev_malloc.so.6.0 00:09:30.426 SO libspdk_bdev_error.so.6.0 00:09:30.426 SYMLINK libspdk_bdev_delay.so 00:09:30.426 SYMLINK libspdk_bdev_gpt.so 00:09:30.426 CC module/bdev/passthru/vbdev_passthru.o 00:09:30.684 SYMLINK libspdk_bdev_malloc.so 00:09:30.684 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:30.684 CC module/bdev/nvme/nvme_rpc.o 00:09:30.684 SYMLINK libspdk_bdev_error.so 00:09:30.684 CC module/bdev/raid/bdev_raid.o 00:09:30.684 CC module/bdev/split/vbdev_split.o 00:09:30.684 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:30.684 CC module/bdev/xnvme/bdev_xnvme.o 00:09:30.684 LIB libspdk_bdev_lvol.a 00:09:30.684 CC module/bdev/aio/bdev_aio.o 00:09:30.684 SO 
libspdk_bdev_lvol.so.6.0 00:09:30.942 CC module/bdev/nvme/bdev_mdns_client.o 00:09:30.942 CC module/bdev/split/vbdev_split_rpc.o 00:09:30.942 SYMLINK libspdk_bdev_lvol.so 00:09:30.942 CC module/bdev/nvme/vbdev_opal.o 00:09:30.942 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:30.942 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:09:30.942 CC module/bdev/aio/bdev_aio_rpc.o 00:09:30.942 LIB libspdk_bdev_split.a 00:09:31.201 LIB libspdk_bdev_passthru.a 00:09:31.201 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:31.201 SO libspdk_bdev_split.so.6.0 00:09:31.201 SO libspdk_bdev_passthru.so.6.0 00:09:31.201 SYMLINK libspdk_bdev_split.so 00:09:31.201 LIB libspdk_bdev_xnvme.a 00:09:31.201 CC module/bdev/raid/bdev_raid_rpc.o 00:09:31.201 SYMLINK libspdk_bdev_passthru.so 00:09:31.201 LIB libspdk_bdev_aio.a 00:09:31.201 SO libspdk_bdev_xnvme.so.3.0 00:09:31.201 LIB libspdk_bdev_zone_block.a 00:09:31.201 SO libspdk_bdev_aio.so.6.0 00:09:31.201 SO libspdk_bdev_zone_block.so.6.0 00:09:31.476 SYMLINK libspdk_bdev_xnvme.so 00:09:31.476 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:31.476 SYMLINK libspdk_bdev_aio.so 00:09:31.476 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:31.476 CC module/bdev/ftl/bdev_ftl.o 00:09:31.476 CC module/bdev/raid/bdev_raid_sb.o 00:09:31.476 CC module/bdev/iscsi/bdev_iscsi.o 00:09:31.476 SYMLINK libspdk_bdev_zone_block.so 00:09:31.476 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:31.476 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:31.476 CC module/bdev/raid/raid0.o 00:09:31.476 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:31.476 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:31.476 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:31.734 CC module/bdev/raid/raid1.o 00:09:31.734 CC module/bdev/raid/concat.o 00:09:31.734 LIB libspdk_bdev_ftl.a 00:09:31.734 LIB libspdk_bdev_iscsi.a 00:09:31.734 SO libspdk_bdev_ftl.so.6.0 00:09:31.734 SO libspdk_bdev_iscsi.so.6.0 00:09:31.992 SYMLINK libspdk_bdev_ftl.so 00:09:31.992 SYMLINK libspdk_bdev_iscsi.so 00:09:31.992 LIB libspdk_bdev_raid.a 00:09:31.992 SO libspdk_bdev_raid.so.6.0 00:09:31.992 LIB libspdk_bdev_virtio.a 00:09:31.992 SO libspdk_bdev_virtio.so.6.0 00:09:32.251 SYMLINK libspdk_bdev_raid.so 00:09:32.251 SYMLINK libspdk_bdev_virtio.so 00:09:33.185 LIB libspdk_bdev_nvme.a 00:09:33.185 SO libspdk_bdev_nvme.so.7.0 00:09:33.443 SYMLINK libspdk_bdev_nvme.so 00:09:34.009 CC module/event/subsystems/keyring/keyring.o 00:09:34.009 CC module/event/subsystems/sock/sock.o 00:09:34.009 CC module/event/subsystems/scheduler/scheduler.o 00:09:34.009 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:34.009 CC module/event/subsystems/vmd/vmd.o 00:09:34.009 CC module/event/subsystems/iobuf/iobuf.o 00:09:34.009 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:34.009 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:34.009 LIB libspdk_event_keyring.a 00:09:34.009 LIB libspdk_event_sock.a 00:09:34.009 LIB libspdk_event_vhost_blk.a 00:09:34.009 LIB libspdk_event_vmd.a 00:09:34.009 SO libspdk_event_keyring.so.1.0 00:09:34.009 LIB libspdk_event_iobuf.a 00:09:34.267 LIB libspdk_event_scheduler.a 00:09:34.267 SO libspdk_event_vhost_blk.so.3.0 00:09:34.267 SO libspdk_event_sock.so.5.0 00:09:34.267 SO libspdk_event_vmd.so.6.0 00:09:34.267 SO libspdk_event_scheduler.so.4.0 00:09:34.267 SO libspdk_event_iobuf.so.3.0 00:09:34.267 SYMLINK libspdk_event_keyring.so 00:09:34.267 SYMLINK libspdk_event_sock.so 00:09:34.267 SYMLINK libspdk_event_vhost_blk.so 00:09:34.267 SYMLINK libspdk_event_scheduler.so 00:09:34.267 SYMLINK libspdk_event_vmd.so 00:09:34.267 SYMLINK 
libspdk_event_iobuf.so 00:09:34.525 CC module/event/subsystems/accel/accel.o 00:09:34.783 LIB libspdk_event_accel.a 00:09:34.783 SO libspdk_event_accel.so.6.0 00:09:34.783 SYMLINK libspdk_event_accel.so 00:09:35.042 CC module/event/subsystems/bdev/bdev.o 00:09:35.300 LIB libspdk_event_bdev.a 00:09:35.300 SO libspdk_event_bdev.so.6.0 00:09:35.300 SYMLINK libspdk_event_bdev.so 00:09:35.558 CC module/event/subsystems/scsi/scsi.o 00:09:35.558 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:35.558 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:35.558 CC module/event/subsystems/nbd/nbd.o 00:09:35.558 CC module/event/subsystems/ublk/ublk.o 00:09:35.829 LIB libspdk_event_nbd.a 00:09:35.829 LIB libspdk_event_ublk.a 00:09:35.829 LIB libspdk_event_scsi.a 00:09:35.829 SO libspdk_event_nbd.so.6.0 00:09:35.829 SO libspdk_event_ublk.so.3.0 00:09:35.829 SO libspdk_event_scsi.so.6.0 00:09:35.829 SYMLINK libspdk_event_nbd.so 00:09:35.829 SYMLINK libspdk_event_ublk.so 00:09:35.829 SYMLINK libspdk_event_scsi.so 00:09:35.829 LIB libspdk_event_nvmf.a 00:09:36.102 SO libspdk_event_nvmf.so.6.0 00:09:36.102 SYMLINK libspdk_event_nvmf.so 00:09:36.102 CC module/event/subsystems/iscsi/iscsi.o 00:09:36.102 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:36.360 LIB libspdk_event_vhost_scsi.a 00:09:36.360 LIB libspdk_event_iscsi.a 00:09:36.360 SO libspdk_event_vhost_scsi.so.3.0 00:09:36.360 SO libspdk_event_iscsi.so.6.0 00:09:36.361 SYMLINK libspdk_event_vhost_scsi.so 00:09:36.361 SYMLINK libspdk_event_iscsi.so 00:09:36.618 SO libspdk.so.6.0 00:09:36.618 SYMLINK libspdk.so 00:09:36.876 CC app/trace_record/trace_record.o 00:09:36.876 CXX app/trace/trace.o 00:09:36.876 CC app/spdk_lspci/spdk_lspci.o 00:09:36.876 CC app/nvmf_tgt/nvmf_main.o 00:09:36.876 CC app/iscsi_tgt/iscsi_tgt.o 00:09:36.876 CC examples/ioat/perf/perf.o 00:09:36.876 CC examples/util/zipf/zipf.o 00:09:36.876 CC app/spdk_tgt/spdk_tgt.o 00:09:36.876 CC test/thread/poller_perf/poller_perf.o 00:09:37.135 CC test/dma/test_dma/test_dma.o 00:09:37.135 LINK spdk_lspci 00:09:37.135 LINK nvmf_tgt 00:09:37.135 LINK zipf 00:09:37.135 LINK poller_perf 00:09:37.135 LINK iscsi_tgt 00:09:37.135 LINK spdk_trace_record 00:09:37.135 LINK spdk_tgt 00:09:37.392 LINK ioat_perf 00:09:37.392 CC app/spdk_nvme_perf/perf.o 00:09:37.392 LINK spdk_trace 00:09:37.392 TEST_HEADER include/spdk/accel.h 00:09:37.392 TEST_HEADER include/spdk/accel_module.h 00:09:37.392 TEST_HEADER include/spdk/assert.h 00:09:37.392 TEST_HEADER include/spdk/barrier.h 00:09:37.392 TEST_HEADER include/spdk/base64.h 00:09:37.392 TEST_HEADER include/spdk/bdev.h 00:09:37.392 TEST_HEADER include/spdk/bdev_module.h 00:09:37.392 TEST_HEADER include/spdk/bdev_zone.h 00:09:37.392 TEST_HEADER include/spdk/bit_array.h 00:09:37.392 TEST_HEADER include/spdk/bit_pool.h 00:09:37.392 TEST_HEADER include/spdk/blob_bdev.h 00:09:37.392 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:37.392 TEST_HEADER include/spdk/blobfs.h 00:09:37.392 TEST_HEADER include/spdk/blob.h 00:09:37.392 TEST_HEADER include/spdk/conf.h 00:09:37.392 TEST_HEADER include/spdk/config.h 00:09:37.392 TEST_HEADER include/spdk/cpuset.h 00:09:37.392 TEST_HEADER include/spdk/crc16.h 00:09:37.392 TEST_HEADER include/spdk/crc32.h 00:09:37.392 TEST_HEADER include/spdk/crc64.h 00:09:37.392 TEST_HEADER include/spdk/dif.h 00:09:37.392 TEST_HEADER include/spdk/dma.h 00:09:37.392 TEST_HEADER include/spdk/endian.h 00:09:37.392 TEST_HEADER include/spdk/env_dpdk.h 00:09:37.392 TEST_HEADER include/spdk/env.h 00:09:37.392 TEST_HEADER include/spdk/event.h 
00:09:37.392 TEST_HEADER include/spdk/fd_group.h 00:09:37.392 TEST_HEADER include/spdk/fd.h 00:09:37.392 TEST_HEADER include/spdk/file.h 00:09:37.392 TEST_HEADER include/spdk/ftl.h 00:09:37.392 TEST_HEADER include/spdk/gpt_spec.h 00:09:37.392 TEST_HEADER include/spdk/hexlify.h 00:09:37.392 TEST_HEADER include/spdk/histogram_data.h 00:09:37.392 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:37.392 TEST_HEADER include/spdk/idxd.h 00:09:37.392 TEST_HEADER include/spdk/idxd_spec.h 00:09:37.392 TEST_HEADER include/spdk/init.h 00:09:37.393 TEST_HEADER include/spdk/ioat.h 00:09:37.393 TEST_HEADER include/spdk/ioat_spec.h 00:09:37.393 TEST_HEADER include/spdk/iscsi_spec.h 00:09:37.393 TEST_HEADER include/spdk/json.h 00:09:37.393 TEST_HEADER include/spdk/jsonrpc.h 00:09:37.393 TEST_HEADER include/spdk/keyring.h 00:09:37.651 LINK test_dma 00:09:37.651 TEST_HEADER include/spdk/keyring_module.h 00:09:37.651 TEST_HEADER include/spdk/likely.h 00:09:37.651 TEST_HEADER include/spdk/log.h 00:09:37.651 CC examples/ioat/verify/verify.o 00:09:37.651 TEST_HEADER include/spdk/lvol.h 00:09:37.651 TEST_HEADER include/spdk/memory.h 00:09:37.651 TEST_HEADER include/spdk/mmio.h 00:09:37.651 TEST_HEADER include/spdk/nbd.h 00:09:37.651 CC app/spdk_nvme_identify/identify.o 00:09:37.651 TEST_HEADER include/spdk/net.h 00:09:37.651 TEST_HEADER include/spdk/notify.h 00:09:37.651 CC test/app/bdev_svc/bdev_svc.o 00:09:37.651 TEST_HEADER include/spdk/nvme.h 00:09:37.651 TEST_HEADER include/spdk/nvme_intel.h 00:09:37.651 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:37.651 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:37.651 TEST_HEADER include/spdk/nvme_spec.h 00:09:37.651 TEST_HEADER include/spdk/nvme_zns.h 00:09:37.651 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:37.651 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:37.651 TEST_HEADER include/spdk/nvmf.h 00:09:37.651 TEST_HEADER include/spdk/nvmf_spec.h 00:09:37.651 TEST_HEADER include/spdk/nvmf_transport.h 00:09:37.651 TEST_HEADER include/spdk/opal.h 00:09:37.651 TEST_HEADER include/spdk/opal_spec.h 00:09:37.651 TEST_HEADER include/spdk/pci_ids.h 00:09:37.651 TEST_HEADER include/spdk/pipe.h 00:09:37.651 TEST_HEADER include/spdk/queue.h 00:09:37.651 TEST_HEADER include/spdk/reduce.h 00:09:37.651 TEST_HEADER include/spdk/rpc.h 00:09:37.651 TEST_HEADER include/spdk/scheduler.h 00:09:37.651 CC examples/sock/hello_world/hello_sock.o 00:09:37.651 TEST_HEADER include/spdk/scsi.h 00:09:37.651 TEST_HEADER include/spdk/scsi_spec.h 00:09:37.651 CC examples/thread/thread/thread_ex.o 00:09:37.651 TEST_HEADER include/spdk/sock.h 00:09:37.651 TEST_HEADER include/spdk/stdinc.h 00:09:37.651 TEST_HEADER include/spdk/string.h 00:09:37.651 TEST_HEADER include/spdk/thread.h 00:09:37.651 TEST_HEADER include/spdk/trace.h 00:09:37.651 TEST_HEADER include/spdk/trace_parser.h 00:09:37.651 TEST_HEADER include/spdk/tree.h 00:09:37.651 TEST_HEADER include/spdk/ublk.h 00:09:37.651 TEST_HEADER include/spdk/util.h 00:09:37.651 TEST_HEADER include/spdk/uuid.h 00:09:37.651 TEST_HEADER include/spdk/version.h 00:09:37.651 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:37.651 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:37.651 TEST_HEADER include/spdk/vhost.h 00:09:37.651 TEST_HEADER include/spdk/vmd.h 00:09:37.651 TEST_HEADER include/spdk/xor.h 00:09:37.651 TEST_HEADER include/spdk/zipf.h 00:09:37.651 CXX test/cpp_headers/accel.o 00:09:37.651 LINK interrupt_tgt 00:09:37.651 LINK bdev_svc 00:09:37.651 CC examples/vmd/lsvmd/lsvmd.o 00:09:37.909 LINK verify 00:09:37.909 CXX test/cpp_headers/accel_module.o 
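The TEST_HEADER and CXX test/cpp_headers/*.o lines above appear to come from SPDK's public-header check, which compiles each installed header as its own C++ translation unit so that a header missing includes or extern "C" guards fails the build. A minimal sketch of the idea only (paths and compiler invocation are assumptions, not the actual harness):

    # sketch: one translation unit per public header; not the real SPDK script
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        echo "#include <spdk/$name.h>" > "test/cpp_headers/$name.cpp"      # C header pulled into C++
        c++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done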
00:09:37.909 LINK hello_sock 00:09:37.909 LINK thread 00:09:37.909 LINK lsvmd 00:09:37.909 CC test/env/vtophys/vtophys.o 00:09:38.167 CC test/env/mem_callbacks/mem_callbacks.o 00:09:38.167 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:38.167 CXX test/cpp_headers/assert.o 00:09:38.167 CXX test/cpp_headers/barrier.o 00:09:38.167 CC examples/vmd/led/led.o 00:09:38.167 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:38.167 LINK vtophys 00:09:38.167 CC test/app/histogram_perf/histogram_perf.o 00:09:38.167 LINK env_dpdk_post_init 00:09:38.425 CXX test/cpp_headers/base64.o 00:09:38.425 LINK led 00:09:38.425 LINK histogram_perf 00:09:38.425 LINK spdk_nvme_perf 00:09:38.425 CXX test/cpp_headers/bdev.o 00:09:38.425 CC test/event/event_perf/event_perf.o 00:09:38.425 CC test/nvme/aer/aer.o 00:09:38.684 LINK event_perf 00:09:38.684 CC test/rpc_client/rpc_client_test.o 00:09:38.684 CXX test/cpp_headers/bdev_module.o 00:09:38.684 LINK spdk_nvme_identify 00:09:38.684 LINK mem_callbacks 00:09:38.684 CC examples/idxd/perf/perf.o 00:09:38.684 LINK nvme_fuzz 00:09:38.684 CC test/accel/dif/dif.o 00:09:38.684 CC test/blobfs/mkfs/mkfs.o 00:09:38.684 LINK rpc_client_test 00:09:38.942 CC test/event/reactor/reactor.o 00:09:38.942 CXX test/cpp_headers/bdev_zone.o 00:09:38.942 LINK aer 00:09:38.942 CC app/spdk_nvme_discover/discovery_aer.o 00:09:38.942 CC test/env/memory/memory_ut.o 00:09:38.942 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:38.942 LINK reactor 00:09:38.942 LINK mkfs 00:09:38.942 CC test/env/pci/pci_ut.o 00:09:38.942 CXX test/cpp_headers/bit_array.o 00:09:38.942 LINK idxd_perf 00:09:39.200 LINK spdk_nvme_discover 00:09:39.200 CC test/nvme/reset/reset.o 00:09:39.200 CXX test/cpp_headers/bit_pool.o 00:09:39.200 CC test/event/reactor_perf/reactor_perf.o 00:09:39.200 LINK dif 00:09:39.200 CC test/nvme/sgl/sgl.o 00:09:39.474 CXX test/cpp_headers/blob_bdev.o 00:09:39.474 CC app/spdk_top/spdk_top.o 00:09:39.474 CC examples/accel/perf/accel_perf.o 00:09:39.474 LINK reactor_perf 00:09:39.474 LINK reset 00:09:39.474 LINK pci_ut 00:09:39.734 CXX test/cpp_headers/blobfs_bdev.o 00:09:39.734 LINK sgl 00:09:39.734 CC test/event/app_repeat/app_repeat.o 00:09:39.734 CC test/lvol/esnap/esnap.o 00:09:39.734 CXX test/cpp_headers/blobfs.o 00:09:39.734 CXX test/cpp_headers/blob.o 00:09:39.992 CC test/nvme/e2edp/nvme_dp.o 00:09:39.992 LINK app_repeat 00:09:39.992 CC test/bdev/bdevio/bdevio.o 00:09:39.992 LINK accel_perf 00:09:39.992 CXX test/cpp_headers/conf.o 00:09:39.992 CC test/nvme/overhead/overhead.o 00:09:40.250 LINK memory_ut 00:09:40.250 CXX test/cpp_headers/config.o 00:09:40.250 LINK nvme_dp 00:09:40.250 CC test/event/scheduler/scheduler.o 00:09:40.250 CXX test/cpp_headers/cpuset.o 00:09:40.509 CC examples/blob/hello_world/hello_blob.o 00:09:40.509 LINK bdevio 00:09:40.509 CXX test/cpp_headers/crc16.o 00:09:40.509 LINK overhead 00:09:40.509 CC test/nvme/err_injection/err_injection.o 00:09:40.509 LINK scheduler 00:09:40.509 CC test/nvme/startup/startup.o 00:09:40.509 LINK spdk_top 00:09:40.509 CXX test/cpp_headers/crc32.o 00:09:40.509 CXX test/cpp_headers/crc64.o 00:09:40.509 CXX test/cpp_headers/dif.o 00:09:40.767 LINK hello_blob 00:09:40.767 LINK err_injection 00:09:40.767 LINK startup 00:09:40.767 CXX test/cpp_headers/dma.o 00:09:40.767 CXX test/cpp_headers/endian.o 00:09:40.767 CC app/vhost/vhost.o 00:09:40.767 CXX test/cpp_headers/env_dpdk.o 00:09:40.767 CC test/app/jsoncat/jsoncat.o 00:09:41.025 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:41.025 CC test/nvme/reserve/reserve.o 00:09:41.025 CC 
examples/blob/cli/blobcli.o 00:09:41.025 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:41.025 CC test/nvme/simple_copy/simple_copy.o 00:09:41.025 LINK jsoncat 00:09:41.025 CXX test/cpp_headers/env.o 00:09:41.025 LINK vhost 00:09:41.025 LINK iscsi_fuzz 00:09:41.283 LINK reserve 00:09:41.283 CXX test/cpp_headers/event.o 00:09:41.283 CC examples/nvme/hello_world/hello_world.o 00:09:41.283 LINK simple_copy 00:09:41.283 CC examples/nvme/reconnect/reconnect.o 00:09:41.283 CC app/spdk_dd/spdk_dd.o 00:09:41.283 CXX test/cpp_headers/fd_group.o 00:09:41.283 CXX test/cpp_headers/fd.o 00:09:41.542 LINK vhost_fuzz 00:09:41.542 LINK hello_world 00:09:41.542 CC test/nvme/connect_stress/connect_stress.o 00:09:41.542 LINK blobcli 00:09:41.542 CXX test/cpp_headers/file.o 00:09:41.542 CC app/fio/nvme/fio_plugin.o 00:09:41.800 CXX test/cpp_headers/ftl.o 00:09:41.800 LINK reconnect 00:09:41.800 CC app/fio/bdev/fio_plugin.o 00:09:41.800 LINK connect_stress 00:09:41.800 CC test/app/stub/stub.o 00:09:41.800 LINK spdk_dd 00:09:41.800 CC test/nvme/boot_partition/boot_partition.o 00:09:41.800 CXX test/cpp_headers/gpt_spec.o 00:09:41.800 CC test/nvme/compliance/nvme_compliance.o 00:09:41.800 CXX test/cpp_headers/hexlify.o 00:09:42.058 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:42.058 LINK stub 00:09:42.058 CXX test/cpp_headers/histogram_data.o 00:09:42.058 LINK boot_partition 00:09:42.058 CC test/nvme/fused_ordering/fused_ordering.o 00:09:42.058 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:42.317 CXX test/cpp_headers/idxd.o 00:09:42.317 CC test/nvme/fdp/fdp.o 00:09:42.317 LINK nvme_compliance 00:09:42.317 CC test/nvme/cuse/cuse.o 00:09:42.317 LINK spdk_bdev 00:09:42.317 LINK spdk_nvme 00:09:42.317 LINK doorbell_aers 00:09:42.317 LINK fused_ordering 00:09:42.317 CXX test/cpp_headers/idxd_spec.o 00:09:42.317 CXX test/cpp_headers/init.o 00:09:42.575 CXX test/cpp_headers/ioat.o 00:09:42.575 CXX test/cpp_headers/ioat_spec.o 00:09:42.575 CXX test/cpp_headers/iscsi_spec.o 00:09:42.575 CXX test/cpp_headers/json.o 00:09:42.575 CXX test/cpp_headers/jsonrpc.o 00:09:42.575 CXX test/cpp_headers/keyring.o 00:09:42.575 LINK nvme_manage 00:09:42.833 LINK fdp 00:09:42.833 CC examples/nvme/arbitration/arbitration.o 00:09:42.833 CC examples/nvme/hotplug/hotplug.o 00:09:42.833 CXX test/cpp_headers/keyring_module.o 00:09:42.833 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:42.833 CXX test/cpp_headers/likely.o 00:09:42.833 CC examples/nvme/abort/abort.o 00:09:42.833 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:43.091 CC examples/bdev/hello_world/hello_bdev.o 00:09:43.091 LINK hotplug 00:09:43.091 CXX test/cpp_headers/log.o 00:09:43.091 LINK pmr_persistence 00:09:43.091 LINK cmb_copy 00:09:43.091 LINK arbitration 00:09:43.091 CC examples/bdev/bdevperf/bdevperf.o 00:09:43.091 CXX test/cpp_headers/lvol.o 00:09:43.350 CXX test/cpp_headers/memory.o 00:09:43.350 CXX test/cpp_headers/mmio.o 00:09:43.350 LINK hello_bdev 00:09:43.350 CXX test/cpp_headers/nbd.o 00:09:43.350 CXX test/cpp_headers/net.o 00:09:43.350 CXX test/cpp_headers/notify.o 00:09:43.350 LINK abort 00:09:43.350 CXX test/cpp_headers/nvme.o 00:09:43.350 CXX test/cpp_headers/nvme_intel.o 00:09:43.350 CXX test/cpp_headers/nvme_ocssd.o 00:09:43.350 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:43.350 CXX test/cpp_headers/nvme_spec.o 00:09:43.608 CXX test/cpp_headers/nvme_zns.o 00:09:43.608 CXX test/cpp_headers/nvmf_cmd.o 00:09:43.608 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:43.608 CXX test/cpp_headers/nvmf.o 00:09:43.608 CXX test/cpp_headers/nvmf_spec.o 00:09:43.608 
CXX test/cpp_headers/nvmf_transport.o 00:09:43.608 CXX test/cpp_headers/opal.o 00:09:43.608 CXX test/cpp_headers/opal_spec.o 00:09:43.866 CXX test/cpp_headers/pci_ids.o 00:09:43.866 CXX test/cpp_headers/pipe.o 00:09:43.867 CXX test/cpp_headers/queue.o 00:09:43.867 CXX test/cpp_headers/reduce.o 00:09:43.867 CXX test/cpp_headers/rpc.o 00:09:43.867 LINK cuse 00:09:43.867 CXX test/cpp_headers/scheduler.o 00:09:43.867 CXX test/cpp_headers/scsi.o 00:09:43.867 CXX test/cpp_headers/scsi_spec.o 00:09:43.867 CXX test/cpp_headers/sock.o 00:09:43.867 CXX test/cpp_headers/stdinc.o 00:09:44.124 CXX test/cpp_headers/string.o 00:09:44.124 CXX test/cpp_headers/thread.o 00:09:44.124 CXX test/cpp_headers/trace.o 00:09:44.124 LINK bdevperf 00:09:44.124 CXX test/cpp_headers/trace_parser.o 00:09:44.124 CXX test/cpp_headers/tree.o 00:09:44.124 CXX test/cpp_headers/ublk.o 00:09:44.124 CXX test/cpp_headers/util.o 00:09:44.124 CXX test/cpp_headers/uuid.o 00:09:44.124 CXX test/cpp_headers/version.o 00:09:44.124 CXX test/cpp_headers/vfio_user_pci.o 00:09:44.124 CXX test/cpp_headers/vfio_user_spec.o 00:09:44.124 CXX test/cpp_headers/vhost.o 00:09:44.382 CXX test/cpp_headers/vmd.o 00:09:44.382 CXX test/cpp_headers/xor.o 00:09:44.382 CXX test/cpp_headers/zipf.o 00:09:44.640 CC examples/nvmf/nvmf/nvmf.o 00:09:44.898 LINK nvmf 00:09:46.272 LINK esnap 00:09:46.839 00:09:46.839 real 1m13.692s 00:09:46.839 user 7m15.695s 00:09:46.839 sys 1m38.355s 00:09:46.839 17:10:32 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:09:46.839 17:10:32 make -- common/autotest_common.sh@10 -- $ set +x 00:09:46.839 ************************************ 00:09:46.839 END TEST make 00:09:46.839 ************************************ 00:09:46.839 17:10:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:46.839 17:10:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:46.839 17:10:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:46.839 17:10:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:46.839 17:10:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:46.839 17:10:32 -- pm/common@44 -- $ pid=5224 00:09:46.839 17:10:32 -- pm/common@50 -- $ kill -TERM 5224 00:09:46.839 17:10:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:46.839 17:10:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:46.839 17:10:32 -- pm/common@44 -- $ pid=5226 00:09:46.839 17:10:32 -- pm/common@50 -- $ kill -TERM 5226 00:09:46.839 17:10:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.839 17:10:33 -- nvmf/common.sh@7 -- # uname -s 00:09:46.839 17:10:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.839 17:10:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.839 17:10:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.839 17:10:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.839 17:10:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.839 17:10:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.839 17:10:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.839 17:10:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.839 17:10:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.839 17:10:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.839 17:10:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:09:46.839 17:10:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:09:46.839 17:10:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.839 17:10:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.839 17:10:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:46.839 17:10:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.839 17:10:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.839 17:10:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.839 17:10:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.839 17:10:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.839 17:10:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.839 17:10:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.839 17:10:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.839 17:10:33 -- paths/export.sh@5 -- # export PATH 00:09:46.839 17:10:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.839 17:10:33 -- nvmf/common.sh@47 -- # : 0 00:09:46.839 17:10:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.839 17:10:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.839 17:10:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.839 17:10:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.839 17:10:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.840 17:10:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.840 17:10:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.840 17:10:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.840 17:10:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:46.840 17:10:33 -- spdk/autotest.sh@32 -- # uname -s 00:09:46.840 17:10:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:46.840 17:10:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:46.840 17:10:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:46.840 17:10:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:46.840 17:10:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:46.840 17:10:33 -- spdk/autotest.sh@44 -- # modprobe nbd 
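The old_core_pattern/echo pair above swaps the kernel's crash-dump handler: when /proc/sys/kernel/core_pattern starts with '|', the kernel pipes each core dump to the named program with the %-specifiers expanded (%P = PID, %s = signal number, %t = dump time), so autotest replaces systemd-coredump with SPDK's core-collector.sh for the duration of the run. A minimal sketch of that swap; the restore-on-exit step is an assumption and does not appear in this log:

    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)        # save the systemd-coredump handler
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        | sudo tee /proc/sys/kernel/core_pattern >/dev/null      # kernel now pipes cores to the collector
    mkdir -p "$output_dir/coredumps"                             # $output_dir is an assumed variable
    trap 'echo "$old_core_pattern" | sudo tee /proc/sys/kernel/core_pattern >/dev/null' EXIT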
00:09:47.098 17:10:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:47.098 17:10:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:47.098 17:10:33 -- spdk/autotest.sh@48 -- # udevadm_pid=53717 00:09:47.098 17:10:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:47.098 17:10:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:47.098 17:10:33 -- pm/common@17 -- # local monitor 00:09:47.098 17:10:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:47.098 17:10:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:47.098 17:10:33 -- pm/common@25 -- # sleep 1 00:09:47.098 17:10:33 -- pm/common@21 -- # date +%s 00:09:47.098 17:10:33 -- pm/common@21 -- # date +%s 00:09:47.098 17:10:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721841033 00:09:47.098 17:10:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721841033 00:09:47.098 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721841033_collect-vmstat.pm.log 00:09:47.098 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721841033_collect-cpu-load.pm.log 00:09:48.030 17:10:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:48.030 17:10:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:48.030 17:10:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.030 17:10:34 -- common/autotest_common.sh@10 -- # set +x 00:09:48.030 17:10:34 -- spdk/autotest.sh@59 -- # create_test_list 00:09:48.030 17:10:34 -- common/autotest_common.sh@748 -- # xtrace_disable 00:09:48.030 17:10:34 -- common/autotest_common.sh@10 -- # set +x 00:09:48.030 17:10:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:48.030 17:10:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:48.030 17:10:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:48.030 17:10:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:48.030 17:10:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:48.030 17:10:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:48.030 17:10:34 -- common/autotest_common.sh@1455 -- # uname 00:09:48.030 17:10:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:09:48.030 17:10:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:48.030 17:10:34 -- common/autotest_common.sh@1475 -- # uname 00:09:48.030 17:10:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:09:48.030 17:10:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:09:48.030 17:10:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:09:48.030 17:10:34 -- spdk/autotest.sh@72 -- # hash lcov 00:09:48.030 17:10:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:48.030 17:10:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:09:48.030 --rc lcov_branch_coverage=1 00:09:48.030 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 ' 00:09:48.030 17:10:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:09:48.030 --rc lcov_branch_coverage=1 00:09:48.030 --rc 
lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 ' 00:09:48.030 17:10:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:09:48.030 --rc lcov_branch_coverage=1 00:09:48.030 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --no-external' 00:09:48.030 17:10:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:09:48.030 --rc lcov_branch_coverage=1 00:09:48.030 --rc lcov_function_coverage=1 00:09:48.030 --rc genhtml_branch_coverage=1 00:09:48.030 --rc genhtml_function_coverage=1 00:09:48.030 --rc genhtml_legend=1 00:09:48.030 --rc geninfo_all_blocks=1 00:09:48.030 --no-external' 00:09:48.030 17:10:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:09:48.030 lcov: LCOV version 1.14 00:09:48.030 17:10:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:02.903 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:02.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:15.105 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:15.105 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:15.105 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:15.105 [the same two-line pair, "<header>.gcno:no functions found" followed by the geninfo WARNING, repeats here for every remaining header under test/cpp_headers, bit_pool.gcno through zipf.gcno]
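The LCOV_OPTS export and the 'lcov ... -c -i -t Baseline' capture above implement the standard baseline-then-merge coverage flow: the initial (-i) capture records zero counts for every instrumented file from its .gcno data, so sources that no test ever executes still appear at 0% after merging with the post-test capture, which is also why the header-only .gcno files above legitimately report "no functions found". A sketch of the full flow with lcov 1.14, using assumed output names:

    lcov $LCOV_OPTS -q -c -i -t Baseline -d . -o cov_base.info            # zero-count baseline from .gcno files
    # ... run the test suite; execution writes .gcda counter files ...
    lcov $LCOV_OPTS -q -c -t Tests -d . -o cov_test.info                  # capture the real execution counts
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info   # merge; untested files keep 0% instead of vanishing
    genhtml --branch-coverage --legend cov_total.info -o coverage_html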
00:10:18.394 17:11:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:18.394 17:11:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.394 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:10:18.394 17:11:03 -- spdk/autotest.sh@91 -- # rm -f 00:10:18.394 17:11:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not
binding PCI dev 00:10:18.961 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:18.961 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:18.961 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:10:18.961 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:10:18.961 17:11:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:18.961 17:11:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:18.961 17:11:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:18.961 17:11:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:18.961 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.961 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:18.961 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:18.961 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:18.962 17:11:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:18.962 17:11:05 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:10:18.962 17:11:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:18.962 17:11:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:18.962 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:18.962 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:18.962 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:18.962 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:18.962 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:18.962 No valid GPT data, bailing 00:10:18.962 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:18.962 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:18.962 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:18.962 17:11:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:18.962 1+0 records in 00:10:18.962 1+0 records out 00:10:18.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115549 s, 90.7 MB/s 00:10:18.962 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:18.962 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:18.962 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:10:18.962 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:10:18.962 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:19.220 No valid GPT data, bailing 00:10:19.220 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:19.220 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:19.220 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:19.220 17:11:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:19.220 1+0 records in 00:10:19.220 1+0 records out 00:10:19.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353486 s, 297 MB/s 00:10:19.221 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:19.221 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:19.221 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:10:19.221 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:10:19.221 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:10:19.221 No valid GPT data, bailing 00:10:19.221 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:10:19.221 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:19.221 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:19.221 17:11:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:10:19.221 1+0 records in 00:10:19.221 1+0 records out 00:10:19.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514409 s, 204 MB/s 00:10:19.221 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:19.221 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:19.221 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:10:19.221 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:10:19.221 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:10:19.221 No valid GPT data, bailing 00:10:19.221 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:10:19.221 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:19.221 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:19.221 17:11:05 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:10:19.221 1+0 records in 00:10:19.221 1+0 records out 00:10:19.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441031 s, 238 MB/s 00:10:19.221 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:19.221 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:19.221 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:10:19.221 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:10:19.221 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:10:19.479 No valid GPT data, bailing 00:10:19.479 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:10:19.479 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:19.479 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:19.479 17:11:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:10:19.479 1+0 records in 00:10:19.479 1+0 records out 00:10:19.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504517 s, 208 MB/s 00:10:19.479 17:11:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:19.479 17:11:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:19.479 17:11:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:10:19.479 17:11:05 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:10:19.479 17:11:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:10:19.479 No valid GPT data, bailing 00:10:19.479 17:11:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:10:19.479 17:11:05 -- scripts/common.sh@391 -- # pt= 00:10:19.479 17:11:05 -- scripts/common.sh@392 -- # return 1 00:10:19.479 17:11:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:10:19.479 1+0 records in 00:10:19.479 1+0 records out 00:10:19.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050984 s, 206 MB/s 00:10:19.479 17:11:05 -- spdk/autotest.sh@118 -- # sync 00:10:19.479 17:11:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:19.479 17:11:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:19.479 17:11:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:21.378 17:11:07 -- spdk/autotest.sh@124 -- # uname -s 00:10:21.378 17:11:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:10:21.378 17:11:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:21.378 17:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:21.378 17:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.378 17:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:21.378 ************************************ 00:10:21.378 START TEST setup.sh 00:10:21.378 ************************************ 00:10:21.378 17:11:07 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:21.378 * Looking for test storage... 
00:10:21.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:21.378 17:11:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:10:21.378 17:11:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:10:21.378 17:11:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:21.378 17:11:07 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:21.378 17:11:07 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.378 17:11:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:21.378 ************************************ 00:10:21.378 START TEST acl 00:10:21.378 ************************************ 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:21.378 * Looking for test storage... 00:10:21.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:21.378 17:11:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:21.378 17:11:07 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.378 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:21.636 17:11:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:21.636 17:11:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:10:21.636 17:11:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:10:21.636 17:11:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:10:21.636 17:11:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:10:21.636 17:11:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:10:21.636 17:11:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:21.636 17:11:07 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:22.569 17:11:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:10:22.569 17:11:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:10:22.569 17:11:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:22.569 17:11:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:10:22.569 17:11:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:10:22.569 17:11:08 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:23.134 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:10:23.134 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:23.134 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.699 Hugepages 00:10:23.699 node hugesize free / total 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.699 00:10:23.699 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:23.699 17:11:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:23.700 17:11:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:23.700 17:11:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:23.700 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:23.957 17:11:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:10:23.957 17:11:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:10:23.957 17:11:10 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.957 17:11:10 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.957 17:11:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:23.957 ************************************ 00:10:23.957 START TEST denied 00:10:23.957 ************************************ 00:10:23.957 17:11:10 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:10:23.957 17:11:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:10:23.957 17:11:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:10:23.957 17:11:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:10:23.957 17:11:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:10:23.957 17:11:10 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:25.329 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:10:25.329 17:11:11 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:25.329 17:11:11 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:31.944 00:10:31.944 real 0m7.118s 00:10:31.944 user 0m0.825s 00:10:31.944 sys 0m1.337s 00:10:31.944 17:11:17 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.944 ************************************ 00:10:31.944 END TEST denied 00:10:31.944 ************************************ 00:10:31.944 17:11:17 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:10:31.944 17:11:17 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:10:31.944 17:11:17 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:31.944 17:11:17 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.944 17:11:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:31.944 ************************************ 00:10:31.944 START TEST allowed 00:10:31.944 ************************************ 00:10:31.944 17:11:17 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:10:31.944 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:10:31.944 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:10:31.944 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:10:31.944 17:11:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:10:31.944 17:11:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:32.202 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:32.202 17:11:18 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:33.579 00:10:33.579 real 0m2.138s 00:10:33.579 user 0m0.973s 00:10:33.579 sys 0m1.154s 00:10:33.579 17:11:19 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.579 17:11:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:10:33.579 ************************************ 00:10:33.579 END TEST allowed 00:10:33.579 ************************************ 00:10:33.579 00:10:33.579 real 0m11.973s 00:10:33.579 user 0m3.052s 00:10:33.579 sys 0m3.959s 00:10:33.579 17:11:19 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.579 17:11:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:33.579 ************************************ 00:10:33.579 END TEST acl 00:10:33.579 ************************************ 00:10:33.579 17:11:19 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:33.579 17:11:19 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.579 17:11:19 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.579 17:11:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:33.579 ************************************ 00:10:33.579 START TEST hugepages 00:10:33.579 ************************************ 00:10:33.579 17:11:19 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:33.579 * Looking for test storage... 
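Before the hugepages suite starts, it is worth spelling out what the acl denied/allowed tests above exercised: scripts/setup.sh honors the PCI_BLOCKED and PCI_ALLOWED environment variables when deciding which controllers to rebind, and setup/acl.sh then checks the resulting binding through sysfs. A condensed sketch of both checks (the BDFs are the QEMU controllers from this run; run from an SPDK checkout as root):

PCI_BLOCKED=' 0000:00:10.0' scripts/setup.sh config   # logs: Skipping denied controller at 0000:00:10.0
PCI_ALLOWED='0000:00:10.0'  scripts/setup.sh config   # logs: 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic

# verify a controller's current binding the same way acl.sh does
driver=$(readlink -f /sys/bus/pci/devices/0000:00:10.0/driver)
echo "${driver##*/}"    # "nvme" while blocked, "uio_pci_generic" after the allowed run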
00:10:33.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5827576 kB' 'MemAvailable: 7411288 kB' 'Buffers: 2436 kB' 'Cached: 1797244 kB' 'SwapCached: 0 kB' 'Active: 444308 kB' 'Inactive: 1457168 kB' 'Active(anon): 112308 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 103752 kB' 'Mapped: 48692 kB' 'Shmem: 10512 kB' 'KReclaimable: 63412 kB' 'Slab: 135804 kB' 'SReclaimable: 63412 kB' 'SUnreclaim: 72392 kB' 'KernelStack: 6316 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:10:33.579 17:11:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@32 tests each remaining /proc/meminfo field (MemAvailable, Buffers, Cached, ..., HugePages_Rsvd, HugePages_Surp) against Hugepagesize and continues past every non-match ...]
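The elided scan is common.sh's get_meminfo helper: it splits every /proc/meminfo line on ': ', compares the field name against the one requested (the \H\u\g\e\p\a\g\e\s\i\z\e escapes are just how xtrace prints the glob comparison), and echoes the value on the first match. A simplified re-implementation of that idea (the original uses mapfile into an array rather than a while loop):

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # numeric part only; the "kB" unit lands in $_
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize         # prints 2048 on this runner, as traced below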
00:10:33.581 17:11:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:33.581 17:11:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:10:33.581 17:11:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.581 17:11:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.581 17:11:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:33.581 ************************************ 00:10:33.581 START TEST default_setup 00:10:33.581 ************************************ 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:10:33.581 17:11:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:34.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.714 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.714 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.714 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.714 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:34.714 
17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.714 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937276 kB' 'MemAvailable: 9520708 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463816 kB' 'Inactive: 1457180 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135104 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72272 kB' 'KernelStack: 6368 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.977 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.977 17:11:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[... xtrace elided: the same field-by-field scan repeats, this time for AnonHugePages, continuing past Cached, SwapCached, Active, ..., VmallocChunk ...]
00:10:34.978 17:11:20 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937028 kB' 'MemAvailable: 9520460 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463116 kB' 'Inactive: 1457180 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121992 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135108 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.978 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.979 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937028 kB' 'MemAvailable: 9520460 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462840 kB' 'Inactive: 1457180 kB' 'Active(anon): 130840 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121952 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135108 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:34.980 17:11:21 
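Each run of IFS=': ' / read -r var val _ / compare / continue records above is one iteration of the same loop: get_meminfo (traced out of setup/common.sh) replays a /proc/meminfo snapshot through printf '%s\n' and scans it line by line until the requested field matches. The backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how xtrace prints a literal (non-glob) == operand. A minimal sketch of that helper, reconstructed from the trace (the real script may differ in detail; extglob is assumed to be enabled, as the scripts do):

get_meminfo() { # usage: get_meminfo <field> [numa-node]
	local get=$1 node=${2:-}
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Per-NUMA-node lookups read the node's own meminfo instead; its lines
	# carry a "Node <N> " prefix that the extglob substitution strips below.
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # needs: shopt -s extglob

	while IFS=': ' read -r var val _; do
		# Quoted RHS makes this a literal match, e.g. HugePages_Surp.
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Under this reading, the hugepages.sh lines in the trace amount to anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), each returning 0 on this host.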
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:10:34.980 17:11:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937028 kB' 'MemAvailable: 9520460 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462840 kB' 'Inactive: 1457180 kB' 'Active(anon): 130840 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121952 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135108 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:34.980 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the scan cycles through the remaining fields, MemFree through HugePages_Free, one compare / continue per field ...]
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
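hugepages.sh@107 and @109 are the actual assertions of default_setup: the expected pool of 1024 pages must equal nr_hugepages plus any surplus and reserved pages, and (with both at zero) nr_hugepages itself. The same accounting can be rechecked standalone against /proc/meminfo; the helper name and the hard-coded expected count below are illustrative, not the suite's own:

#!/usr/bin/env bash
# Recheck hugepage pool accounting in the spirit of the traced assertions.
expected=1024 # requested pool size; the suite derives this elsewhere

meminfo_field() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
surp=$(meminfo_field HugePages_Surp)
resv=$(meminfo_field HugePages_Rsvd)
anon=$(meminfo_field AnonHugePages)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Every page must be explained by the request plus surplus/reserved pages;
# with both at 0 the two checks coincide, as they do in this run.
(( expected == nr_hugepages + surp + resv )) || { echo "pool accounting mismatch" >&2; exit 1; }
(( expected == nr_hugepages )) || { echo "unexpected surplus/reserved pages" >&2; exit 1; }

On the snapshot above this passes trivially: HugePages_Total is 1024 while HugePages_Surp and HugePages_Rsvd are both 0.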
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:10:34.982 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:10:34.983 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937028 kB' 'MemAvailable: 9520460 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462792 kB' 'Inactive: 1457180 kB' 'Active(anon): 130792 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121908 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135108 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6304 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:34.983 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:34.983 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the scan cycles through the remaining fields, MemFree through VmallocTotal, one compare / continue per field ...]
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
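The loop traced above is the harness's get_meminfo helper scanning /proc/meminfo (or a per-node copy under sysfs) one key at a time. A minimal standalone sketch of the same parsing pattern, with illustrative names, not the SPDK script itself:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# get_mem KEY [NODE] - print KEY's value from /proc/meminfo, or from the
# per-node copy under sysfs when a NUMA node is given (illustrative sketch).
get_mem() {
  local get=$1 node=$2 var val _ line
  local mem_f=/proc/meminfo mem
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Per-node files prefix every line with "Node <N> "; strip that prefix.
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
    # IFS=': ' splits "HugePages_Total:    1024" into key and value.
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

get_mem HugePages_Total      # 1024 on the box in this log
get_mem HugePages_Surp 0     # surplus pages on NUMA node 0

Non-matching keys just fall through to the next iteration, which is exactly the test/continue/read cadence that fills the trace.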
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7937028 kB' 'MemUsed: 4304944 kB' 'SwapCached: 0 kB' 'Active: 462744 kB' 'Inactive: 1457180 kB' 'Active(anon): 130744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1799668 kB' 'Mapped: 48868 kB' 'AnonPages: 121860 kB' 'Shmem: 10472 kB' 'KernelStack: 6356 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135108 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:10:34.984 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical test/continue/read iterations for the remaining node0 keys, MemFree through HugePages_Free, elided ...]
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:10:34.986 node0=1024 expecting 1024
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:10:34.986
00:10:34.986 real 0m1.379s
00:10:34.986 user 0m0.611s
00:10:34.986 sys 0m0.754s
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:34.986 17:11:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:10:34.986 ************************************
00:10:34.986 END TEST default_setup
00:10:34.986 ************************************
00:10:34.986 17:11:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
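The 'node0=1024 expecting 1024' line is the whole point of the pass: the per-node counts the kernel reports must add up to what the test configured. A sketch of that invariant using the kernel's standard per-node hugepage counters under sysfs (paths are the stock kernel layout; variable names are illustrative, not the SPDK harness itself):

#!/usr/bin/env bash
shopt -s nullglob

# Verify per-node hugepage counts add up to the configured total
# (expected=1024 on the box in this log, default 2 MiB pages assumed).
expected=1024
total=0
for node in /sys/devices/system/node/node*; do
  f=$node/hugepages/hugepages-2048kB/nr_hugepages
  [[ -r $f ]] && total=$(( total + $(<"$f") ))
done
if (( total == expected )); then
  echo "node hugepages add up: $total == $expected"
else
  echo "mismatch: got $total, expected $expected" >&2
fi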
00:10:34.986 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:34.986 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:34.986 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:34.986 ************************************
00:10:34.986 START TEST per_node_1G_alloc
00:10:34.986 ************************************
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:34.986 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:35.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:35.507 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:35.507 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:35.507 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:35.507 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
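The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above keys off the kernel's transparent-hugepage mode string, in which the active mode is the bracketed token. A hedged standalone equivalent (the file path is the standard sysfs knob; the helper name is illustrative):

#!/usr/bin/env bash
# The kernel reports the active THP mode by bracketing it, e.g.
# "always [madvise] never". AnonHugePages can only grow when THP
# is not disabled, which is what the pattern match checks.
thp_enabled() {
  local modes
  modes=$(</sys/kernel/mm/transparent_hugepage/enabled)
  # True unless the selected (bracketed) mode is "never".
  [[ $modes != *"[never]"* ]]
}

if thp_enabled; then
  echo "THP active: anonymous huge pages may appear in meminfo"
else
  echo "THP disabled: AnonHugePages is expected to stay 0"
fi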
00:10:35.507 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8987604 kB' 'MemAvailable: 10571052 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463396 kB' 'Inactive: 1457196 kB' 'Active(anon): 131396 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135132 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72300 kB' 'KernelStack: 6384 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:35.508 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:35.508 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:10:35.508 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:35.508 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical test/continue/read iterations for the remaining /proc/meminfo keys, MemFree through HardwareCorrupted, elided ...]
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
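One consistency check implied by the snapshot above: Hugetlb must equal HugePages_Total times Hugepagesize, and 512 * 2048 kB = 1048576 kB, exactly the 1 GiB that get_test_nr_hugepages 1048576 requested. As a tiny sketch, with the values taken from the trace:

#!/usr/bin/env bash
# Sanity-check the snapshot: total hugetlb memory equals page count
# times page size (all three numbers below come from the log above).
total=512          # HugePages_Total
pagesz=2048        # Hugepagesize, in kB
hugetlb=1048576    # Hugetlb, in kB
(( total * pagesz == hugetlb )) && echo "consistent: $((total * pagesz)) kB"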
get_meminfo HugePages_Surp 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8987696 kB' 'MemAvailable: 10571144 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463112 kB' 'Inactive: 1457196 kB' 'Active(anon): 131112 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135168 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72336 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.509 
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:35.509 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical read/compare/continue iterations for every key from MemFree through HugePages_Rsvd]
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
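The scan just resolved HugePages_Surp to 0, meaning the kernel holds no surplus pages beyond the configured pool. The same lookup can be spot-checked from an interactive shell with a one-liner; this is only an equivalent query, not the code path the suite runs:

    # Equivalent interactive spot-check (not the suite's code path).
    awk -v key=HugePages_Surp '$1 == (key ":") { print $2 }' /proc/meminfo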
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8987696 kB' 'MemAvailable: 10571144 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463116 kB' 'Inactive: 1457196 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122264 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135168 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72336 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:35.511 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical read/compare/continue iterations for every key from MemFree through HugePages_Free]
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:10:35.513 nr_hugepages=512
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:10:35.513 resv_hugepages=0
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:10:35.513 surplus_hugepages=0
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:10:35.513 anon_hugepages=0
00:10:35.513 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
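Those two arithmetic guards are the pass/fail criterion for this step: the pool must still hold exactly the 512 pages that were requested once surplus and reserved pages are accounted for. A sketch of the same accounting done against a live meminfo, reusing the illustrative get_meminfo_sketch helper from earlier (names are assumptions, not the suite's exact code):

    # Recompute the two guards the trace just evaluated (illustrative).
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 512 in the run above
    surp=$(get_meminfo_sketch HugePages_Surp)            # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
    (( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting off"
    (( 512 == nr_hugepages )) || echo "nr_hugepages drifted from the requested value"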
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8987696 kB' 'MemAvailable: 10571144 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463104 kB' 'Inactive: 1457196 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135168 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72336 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:35.773 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical read/compare/continue iterations for every key from MemFree through WritebackTmp]
00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.774 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:35.775 17:11:21 
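The scan traced above is the whole of get_meminfo: read a meminfo file, split each line on ': ', and walk field names until one matches the requested key, echoing its value. A minimal standalone sketch of that loop, reconstructed from the xtrace (simplified, not the verbatim setup/common.sh source; the sed-based prefix strip stands in for the extglob expansion the real script uses):

    get_meminfo() {   # usage: get_meminfo <field> [node] -- sketch only
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo instead of the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <N> "; strip it, then scan
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

The call that just returned 512 is what feeds the hugepages.sh@110 identity check: the observed HugePages_Total must equal nr_hugepages plus surplus plus reserved pages.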
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8987696 kB' 'MemUsed: 3254276 kB' 'SwapCached: 0 kB' 'Active: 462868 kB' 'Inactive: 1457196 kB' 'Active(anon): 130868 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1799668 kB' 'Mapped: 48692 kB' 'AnonPages: 122328 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135168 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.775 17:11:21 
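Two details in the trace above are worth pulling out: mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo once the per-node file is found, and the mapfile'd lines then have their "Node 0 " prefix removed with an extglob parameter expansion, exactly as shown at setup/common.sh@29. A small runnable demo of that stripping step (assuming a machine with a node0, as in this VM):

    #!/usr/bin/env bash
    shopt -s extglob                    # the +([0-9]) pattern below needs extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"       # first few fields, now in plain meminfo form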
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:35.775 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same @31-32 read/compare loop skips node0's remaining fields -- SwapCached through HugePages_Total -- none of which matches HugePages_Surp]
00:10:35.776
17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:35.776 node0=512 expecting 512 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:35.776 00:10:35.776 real 0m0.684s 00:10:35.776 user 0m0.331s 00:10:35.776 sys 0m0.400s 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.776 17:11:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:35.776 ************************************ 00:10:35.776 END TEST per_node_1G_alloc 00:10:35.776 ************************************ 00:10:35.776 17:11:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:10:35.776 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.776 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.776 17:11:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:35.776 ************************************ 00:10:35.776 START TEST even_2G_alloc 00:10:35.776 ************************************ 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:35.776 
17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:35.776 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:35.777 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:10:35.777 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:10:35.777 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:10:35.777 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:35.777 17:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:36.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:36.333 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:36.333 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:36.333 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:36.333 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:36.333 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:36.334 17:11:22 
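Before counting anonymous hugepages, verify_nr_hugepages peeks at the transparent-hugepage mode: the glob test against *\[\n\e\v\e\r\]* at hugepages.sh@96 above asks whether '[never]' is the bracketed (selected) mode in /sys/kernel/mm/transparent_hugepage/enabled; here the string is "always [madvise] never", so THP is active. Unwrapped from the xtrace escaping, the check amounts to this sketch:

    # equivalent of the hugepages.sh@96 test seen above (sketch): only consult
    # AnonHugePages when THP is not disabled outright
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier
    fi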
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7932392 kB' 'MemAvailable: 9515840 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463312 kB' 'Inactive: 1457196 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122764 kB' 'Mapped: 48816 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135136 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72304 kB' 'KernelStack: 6308 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:36.334 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the @31-32 read/compare loop skips every /proc/meminfo field from SwapCached through Percpu; none matches AnonHugePages]
00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7932392 kB' 'MemAvailable: 9515840 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462896 kB' 'Inactive: 1457196 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135140 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 6352 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.335 17:11:22 
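Once this second get_meminfo HugePages_Surp scan completes, verify_nr_hugepages folds the per-node counts into the sorted_t/sorted_s arrays it declared above, the same move per_node_1G_alloc made at hugepages.sh@127 before printing 'node0=512 expecting 512': indexing an array by a count turns the array's keys into the set of distinct counts, so identical per-node values collapse to one entry. The idiom in isolation:

    # the hugepages.sh@127 idiom in isolation: array indices used as a set
    nodes_test=(512) nodes_sys=(512)    # one NUMA node in this VM, per the trace
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    echo "${!sorted_t[@]}"   # -> 512, the single distinct expected count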
00:10:36.335 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[The @31 read / @32 compare / @32 continue cycle then repeats once per snapshot key, from MemTotal through HugePages_Rsvd, timestamps 00:10:36.335 through 00:10:36.337, until the requested key comes up:]
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:36.337 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:36.338 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7932392 kB' 'MemAvailable: 9515840 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462980 kB' 'Inactive: 1457196 kB' 'Active(anon): 130980 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122336 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135136 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72304 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:36.338 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[Same per-key cycle as above, now compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, keys MemTotal through HugePages_Free, timestamps 00:10:36.338 through 00:10:36.339:]
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:10:36.339 nr_hugepages=1024
00:10:36.339 resv_hugepages=0
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:10:36.339 surplus_hugepages=0
00:10:36.339 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:10:36.339 anon_hugepages=0
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
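Those two arithmetic tests at setup/hugepages.sh@107 and @109 are the point of the whole walk: the surplus, reserved, and anonymous counters just collected must account for every page of the requested allocation. A sketch of the invariant with this run's values (the literal 1024 in the traced (( ... )) is the expected page count, already expanded by the shell before xtrace printed it):

    nr_hugepages=1024       # from the echo above
    anon=0 surp=0 resv=0    # results of the three meminfo lookups
    expected=1024           # target page count for the even_2G_alloc test
    (( expected == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
    (( expected == nr_hugepages ))                 # no surplus/reserved pages hiding in the total

Both tests pass silently here, and the trace immediately re-reads HugePages_Total as a further check.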
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:36.340 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7932392 kB' 'MemAvailable: 9515840 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462972 kB' 'Inactive: 1457196 kB' 'Active(anon): 130972 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135136 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72304 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
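A quick cross-check on this (and the two earlier, near-identical) snapshots: the hugepage fields are internally consistent, and they match the 2G in the test's name.

    echo "$(( 1024 * 2048 )) kB"   # HugePages_Total x Hugepagesize
    # 2097152 kB, exactly the Hugetlb: 2097152 kB line, i.e. 2 GiB reserved

Only a handful of churn fields (Active, Active(anon), AnonPages, Slab, SUnreclaim, KernelStack, PageTables, VmallocUsed) drift between the three printouts; every hugepage counter holds steady at 1024 total, 1024 free, 0 reserved, 0 surplus.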
[From here the trace repeats the familiar @31 read / @32 compare / @32 continue cycle once more, matching each snapshot key against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, timestamps 00:10:36.340 through 00:10:36.341; this console chunk breaks off mid-scan:]
00:10:36.341 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:36.341 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:10:36.341
17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... CmaFree and Unaccepted skipped likewise ...] 00:10:36.341 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.341 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:10:36.341 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.601 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7932392 kB' 'MemUsed: 4309580 kB' 'SwapCached: 0 kB' 'Active: 462916 kB' 'Inactive: 1457196 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1799668 kB' 'Mapped: 48696 kB' 'AnonPages: 122056 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135136 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:36.602 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... the read loop checks and skips every node0 field from MemTotal through HugePages_Free -- none match HugePages_Surp ...] 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:36.603 node0=1024 expecting 1024 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:36.603 00:10:36.603 real 0m0.742s 00:10:36.603 user 0m0.344s 00:10:36.603 sys 0m0.416s 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.603 17:11:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:36.603 ************************************ 00:10:36.603 END TEST even_2G_alloc 00:10:36.603 ************************************ 00:10:36.603 17:11:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:10:36.603 17:11:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:36.603 17:11:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.603 17:11:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
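The get_meminfo calls traced above (setup/common.sh@16 through @33) boil down to a small meminfo parser: pick the per-node meminfo file when a node argument is given, strip the "Node N " prefix those files carry, then read field by field until the requested key matches and print its value. A minimal standalone sketch of that logic, reconstructed from the xtrace -- the real setup/common.sh may differ in small details:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip below

get_meminfo() {
	# get_meminfo <field> [node] -> prints the field's value (kB or page count)
	local get=$1
	local node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Per-node counters live under /sys; prefer them when a node was requested.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Field: value [kB]" lines; print the value of the matching field.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total     # whole-system count: 1024 in the run above
get_meminfo HugePages_Surp 0    # node0 surplus pages: 0 in the run above

This is why the trace spends so many lines per lookup: every non-matching field costs one [[ ... ]] test plus a continue before the loop reaches the HugePages_* entries near the end of the file.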
00:10:36.603 ************************************ 00:10:36.603 START TEST odd_alloc 00:10:36.603 ************************************ 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:36.603 17:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:36.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:37.125 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.125 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.125 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.125 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929344 kB' 'MemAvailable: 9512792 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462976 kB' 'Inactive: 1457196 kB' 'Active(anon): 130976 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135080 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72248 kB' 'KernelStack: 6304 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.125 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... the read loop checks and skips every field from MemTotal through HardwareCorrupted -- none match AnonHugePages ...] 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929344 kB' 'MemAvailable: 9512792 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462900 kB' 'Inactive: 1457196 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135092 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72260 kB' 'KernelStack: 6352 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.126 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... the read loop checks and skips MemTotal through FileHugePages -- none match HugePages_Surp ...] 00:10:37.128
17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
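The xtrace above is get_meminfo() resolving HugePages_Surp for the whole system: it snapshots the meminfo file into an array, strips any "Node N " prefix, then read-loops with IFS=': ' over every key until the requested one matches and echoes its value. A minimal self-contained sketch of that pattern follows; get_meminfo_sketch is an illustrative name, not SPDK's exact helper, and it assumes the standard /proc/meminfo and sysfs per-node meminfo formats.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() { # usage: get_meminfo_sketch <key> [numa-node]
	local get=$1 node=$2 var val _ mem_f mem line
	mem_f=/proc/meminfo
	# with a node argument, read the per-node statistics from sysfs instead
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # sysfs lines carry a "Node N " prefix
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue # the continue seen on every non-matching key above
		echo "${val:-0}"
		return 0
	done
	return 1
}

get_meminfo_sketch HugePages_Surp # prints 0 on this box, matching the trace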
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929344 kB' 'MemAvailable: 9512792 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 462896 kB' 'Inactive: 1457196 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122020 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135092 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72260 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:37.128 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read loop skips MemTotal through HugePages_Free (none matches HugePages_Rsvd, each hits continue) and stops on the HugePages_Rsvd line
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:10:37.130 nr_hugepages=1025
00:10:37.130 resv_hugepages=0
00:10:37.130 surplus_hugepages=0
00:10:37.130 anon_hugepages=0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
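With surp=0 and resv=0 in hand, the two arithmetic guards assert that the kernel honored the deliberately odd allocation exactly: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, and with both of those zero it must equal 1025 outright. A hedged sketch of that accounting, reusing the illustrative helper from above:

nr_hugepages=1025 # the deliberately odd request under test
total=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
(( total == nr_hugepages )) || echo "odd request was rounded by the kernel"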
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929092 kB' 'MemAvailable: 9512540 kB' 'Buffers: 2436 kB' 'Cached: 1797232 kB' 'SwapCached: 0 kB' 'Active: 463168 kB' 'Inactive: 1457196 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135092 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72260 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:37.130 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read loop skips MemTotal through Unaccepted (none matches HugePages_Total, each hits continue) and stops on the HugePages_Total line
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
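get_nodes then enumerates the NUMA nodes under /sys/devices/system/node (a single node on this VM), records the 1025 pages expected on each, and re-queries HugePages_Surp per node through the sysfs meminfo path. A sketch of that pass under the same assumptions; nodes_test and get_meminfo_sketch are the illustrative names from above, and resv is the 0 obtained earlier:

shopt -s extglob nullglob
nodes_test=()
for node_dir in /sys/devices/system/node/node+([0-9]); do
	nodes_test[${node_dir##*node}]=1025 # expected pages on every node
done
echo "no_nodes=${#nodes_test[@]}" # 1 on this single-node VM
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv )) # fold reserved pages into the target
	get_meminfo_sketch HugePages_Surp "$node" # node0 surplus, 0 here
done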
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929092 kB' 'MemUsed: 4312880 kB' 'SwapCached: 0 kB' 'Active: 463208 kB' 'Inactive: 1457196 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1799668 kB' 'Mapped: 48696 kB' 'AnonPages: 122348 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135092 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:10:37.132 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read loop skips the node0 keys MemTotal through SecPageTables (none matches HugePages_Surp, each hits continue)
00:10:37.133 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.133 17:11:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.133 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.133 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:37.391 node0=1025 expecting 1025 00:10:37.391 ************************************ 00:10:37.391 END TEST odd_alloc 00:10:37.391 ************************************ 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:10:37.391 00:10:37.391 real 0m0.723s 00:10:37.391 user 0m0.320s 00:10:37.391 sys 0m0.403s 00:10:37.391 17:11:23 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.391 17:11:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:10:37.391 17:11:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:37.391 17:11:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.391 17:11:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 ************************************ 00:10:37.391 START TEST custom_alloc 00:10:37.391 ************************************ 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node 
in "${!nodes_hp[@]}" 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:37.391 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:37.392 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:37.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:37.912 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.912 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.912 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.912 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:37.912 
17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8971692 kB' 'MemAvailable: 10555144 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 463784 kB' 'Inactive: 1457200 kB' 'Active(anon): 131784 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122612 kB' 'Mapped: 48864 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135052 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72220 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.912 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.913 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8971692 kB' 'MemAvailable: 10555144 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 463472 kB' 'Inactive: 1457200 kB' 'Active(anon): 131472 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122392 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135128 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6360 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.914 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
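[annotation] Each verify pass traced here pulls one counter per full scan — anon=0, then surp=0, with HugePages_Rsvd read next from the global /proc/meminfo (node= is empty, so the sysfs path check fails). The point of the passes is the accounting identity the odd_alloc run asserted earlier as (( 1025 == nr_hugepages + surp + resv )). A hedged condensation for this custom_alloc run, reusing the sketch helper above; the numbers are the ones printed in this log (HugePages_Total: 512, Surp and Rsvd 0, 2048 kB pages):

    nr=512                                        # requested via HUGENODE='nodes_hp[0]=512'
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this log
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this log (global-only counter)
    total=$(get_meminfo_sketch HugePages_Total)   # 512 in this log
    (( total == nr + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    # Size math from the setup step: 1048576 kB requested / 2048 kB per page = 512 pages.

The per-node variant (get_meminfo HugePages_Surp 0, as at hugepages.sh@117) folds any node-local surplus into nodes_test before the same comparison.
[/annotation]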
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:37.915 17:11:24 
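The backslash-heavy patterns that dominate this trace are just how xtrace renders a literal right-hand side of [[ == ]]: unquoted, every character is escaped so the comparison is an exact string match rather than a glob. A minimal standalone sketch of the idiom (illustrative only, not taken from the SPDK scripts):

    #!/usr/bin/env bash
    var=HugePages_Surp

    # Escaped pattern: matches the literal string only (what the trace shows).
    [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo "literal match"

    # Unescaped pattern: '*' globs, so this matches any HugePages_* field.
    [[ $var == HugePages_* ]] && echo "glob match"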
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:10:37.915 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8971692 kB' 'MemAvailable: 10555144 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 463544 kB' 'Inactive: 1457200 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135124 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6344 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
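Everything from common.sh@17 down to the echo/return pair is a single get_meminfo call: snapshot the meminfo file into an array, then scan it line by line for the requested field. A minimal re-sketch of the traced logic (reconstructed from the trace alone, so details may differ from the real setup/common.sh):

    #!/usr/bin/env bash
    # Sketch: print the value of one field from /proc/meminfo (or a node's meminfo).
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local -a mem
        local mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead; the real
        # trace also strips a 'Node <n> ' prefix there (see the node0 pass below).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the 'continue' spam in this trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 against the snapshot above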
00:10:37.916 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # ... read loop: MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and continue ...
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
nr_hugepages=512
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
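The two arithmetic guards at hugepages.sh@107 and @109 are the actual assertion of this test step: all 512 requested pages exist, and none are surplus or reserved. With the values just collected, the checks reduce to a trivially true worked example:

    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0 -> true
    (( 512 == nr_hugepages ))                 # all pages persistent -> true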
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8971692 kB' 'MemAvailable: 10555144 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 463508 kB' 'Inactive: 1457200 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 135124 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6376 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
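This snapshot is self-consistent on the hugepage side: 512 pages at the reported 2048 kB Hugepagesize account exactly for the Hugetlb total. As a quick check:

    echo $(( 512 * 2048 ))   # 1048576, matching 'Hugetlb: 1048576 kB'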
00:10:37.918 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # ... read loop: MemTotal through Unaccepted each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and continue ...
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:10:37.919 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
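For the per-node pass the function is invoked as get_meminfo HugePages_Surp 0, and the @23/@24 lines show the input switching to node0's own meminfo file; those lines carry a 'Node 0 ' prefix that the @29 expansion strips so the rest of the loop can stay unchanged. A minimal sketch of just those two steps (extglob is assumed, since the +([0-9]) pattern in the trace requires it):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Drop the leading 'Node <n> ' so each line parses like plain /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"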
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8971952 kB' 'MemUsed: 3270020 kB' 'SwapCached: 0 kB' 'Active: 463524 kB' 'Inactive: 1457200 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1799672 kB' 'Mapped: 48876 kB' 'AnonPages: 122392 kB' 'Shmem: 10472 kB' 'KernelStack: 6376 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 135124 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
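Unlike the global snapshot, the node0 dump reports MemUsed rather than MemAvailable, and its totals agree: used here is simply total minus free.

    echo $(( 12241972 - 8971952 ))   # 3270020, matching 'MemUsed: 3270020 kB'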
00:10:37.920 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-@32 -- # ... read loop: MemTotal, MemFree, MemUsed and the remaining node0 fields through FilePmdMapped each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ...
00:10:38.179 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:38.179 17:11:24 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # continue 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:38.180 node0=512 expecting 512 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:38.180 00:10:38.180 real 0m0.749s 00:10:38.180 user 0m0.350s 00:10:38.180 sys 0m0.396s 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.180 17:11:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:38.180 ************************************ 00:10:38.180 END TEST custom_alloc 00:10:38.180 ************************************ 00:10:38.180 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:10:38.180 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:38.180 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.180 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:38.180 ************************************ 00:10:38.180 START TEST no_shrink_alloc 00:10:38.180 ************************************ 00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- 
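The banner blocks and the real/user/sys timing above come from the run_test wrapper in common/autotest_common.sh. Its source is not part of this log; a minimal bash sketch consistent only with the traced behavior (arg-count guard, START banner, timed body, END banner) would be:

    # Hedged reduction of run_test as it behaves in this log, not the SPDK source.
    run_test() {
        local test_name=$1; shift               # remaining argv is the command to run
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                               # emits the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return "$rc"
    }
    # usage, as traced at setup/hugepages.sh@215:
    # run_test no_shrink_alloc no_shrink_alloc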
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:38.180 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:38.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:38.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:38.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:38.702 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:38.702 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
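The field-by-field trace that fills the rest of this section is get_meminfo (setup/common.sh) scanning /proc/meminfo until the requested field matches. A self-contained bash sketch of the pattern the trace shows (mapfile, the "Node N" prefix strip, IFS=': ' read) follows; it is hedged, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    # Hedged sketch of the get_meminfo pattern traced in this log.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo mem
        # with a node argument, read the per-node counters instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total            # -> 1024 on this runner
    get_meminfo HugePages_Free 0           # per-node form, as in the custom_alloc check above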
00:10:38.702 [xtrace condensed: setup/common.sh@18-31 sets node=, var/val, mem_f=/proc/meminfo (the /sys/devices/system/node/node/meminfo test and [[ -n '' ]] both fall through), then mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }") and IFS=': ' read -r var val _]
00:10:38.702 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929292 kB' 'MemAvailable: 9512744 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459744 kB' 'Inactive: 1457200 kB' 'Active(anon): 127744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 48304 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134968 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72136 kB' 'KernelStack: 6280 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:38.702-00:10:38.703 [xtrace condensed: the read loop tests every field above against AnonHugePages, continuing past MemTotal ... HardwareCorrupted until the match below]
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:10:38.703 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:38.703-00:10:38.704 [xtrace condensed: same setup/common.sh@18-31 preamble as above]
00:10:38.704 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929044 kB' 'MemAvailable: 9512496 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459292 kB' 'Inactive: 1457200 kB' 'Active(anon): 127292 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118484 kB' 'Mapped: 47976 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134964 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72132 kB' 'KernelStack: 6232 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
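The 1024-page pool these probes verify was reserved earlier by scripts/setup.sh. Whether setup.sh performs exactly these writes is not visible in this excerpt, but the standard hugetlb knobs it would go through, and the counters get_meminfo reads back, are:

    # Hedged illustration using kernel-documented hugetlb interfaces only.
    echo 1024 | sudo tee /proc/sys/vm/nr_hugepages                    # global 2 MiB pool
    echo 1024 | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  # per-node, cf. nodes_test[0]=1024
    grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo          # the counters probed in this trace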
00:10:38.704-00:10:38.705 [xtrace condensed: the read loop tests every field above against HugePages_Surp, continuing past MemTotal ... HugePages_Rsvd until the match below]
00:10:38.705 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:38.705 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:38.706 [xtrace condensed: same setup/common.sh@18-31 preamble as above]
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929044 kB' 'MemAvailable: 9512496 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459480 kB' 'Inactive: 1457200 kB' 'Active(anon): 127480 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118676 kB' 'Mapped: 48036 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134964 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72132 kB' 'KernelStack: 6264 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
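verify_nr_hugepages has now read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is probing HugePages_Rsvd. The exact comparison formula in setup/hugepages.sh is not shown in this excerpt; the arithmetic it must reduce to, given the values traced so far, is:

    # Hedged arithmetic behind the expectation: get_test_nr_hugepages 2097152
    # set nr_hugepages=1024, consistent with Hugepagesize in every snapshot.
    size_kb=2097152                  # requested pool size from get_test_nr_hugepages
    page_kb=2048                     # 'Hugepagesize: 2048 kB'
    echo $(( size_kb / page_kb ))    # -> 1024, matching HugePages_Total/Free above
    # anon=0, surp=0 and the Rsvd probe mean no transparent-huge-page or
    # surplus/reserved pages can skew the Total/Free comparison.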
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:38.706 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [xtrace condensed: the read loop continues past every key of the snapshot above, MemTotal through HugePages_Total -- nothing matches HugePages_Rsvd yet]
00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:38.708 nr_hugepages=1024 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:38.708 resv_hugepages=0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:38.708 surplus_hugepages=0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:38.708 anon_hugepages=0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929200 kB' 'MemAvailable: 9512652 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459304 kB' 'Inactive: 1457200 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 48096 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134960 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6296 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 
kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:38.708 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [xtrace condensed: the read loop continues past MemTotal through Unaccepted -- nothing matches until the HugePages_Total entry itself comes up]
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
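[editor's note] The three lookups traced so far (HugePages_Surp -> 0, HugePages_Rsvd -> 0, HugePages_Total -> 1024) all go through the same get_meminfo helper in the test tree's setup/common.sh. Only its xtrace output is visible here, so the sketch below is a hedged reconstruction -- names and structure inferred from the trace, not copied from the source:

#!/usr/bin/env bash
# Hedged reconstruction of the lookup pattern driving the trace above.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, read the per-node sysfs file instead (cf. common.sh@23/@24).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # sysfs lines carry a "Node 0 " prefix; strip it
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # quoted RHS: literal comparison, one key per pass
        echo "$val"
        return 0
    done
    return 1
}

# The accounting asserted at hugepages.sh@107/@110 then amounts to:
#   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
# which holds in this run: 1024 == 1024 + 0 + 0.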
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-@31 -- # [xtrace condensed: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ']
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7929200 kB' 'MemUsed: 4312772 kB' 'SwapCached: 0 kB' 'Active: 459340 kB' 'Inactive: 1457200 kB' 'Active(anon): 127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1799672 kB' 'Mapped: 48096 kB' 'AnonPages: 118540 kB' 'Shmem: 10472 kB' 'KernelStack: 6312 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 134960 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
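[editor's note] get_nodes (hugepages.sh@27-@33) tallies huge pages per NUMA node before the per-node verification above. The trace only shows the already-expanded assignment nodes_sys[0]=1024, so the origin of that count is an assumption in the sketch below; the per-size sysfs counter is one plausible source:

#!/usr/bin/env bash
# Hedged sketch of the per-node walk behind get_nodes, reconstructed from the trace.
shopt -s extglob   # lets node+([0-9]) glob the node directories, as in the trace

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node", leaving the index.
    # Hypothetical count source -- the real value may come from get_meminfo instead.
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]} node0=${nodes_sys[0]}"   # -> no_nodes=1 node0=1024 on this VM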
00:10:38.710 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [xtrace condensed: the read loop walks the node0 snapshot above key by key, MemTotal through HugePages_Total, continuing on every key that is not HugePages_Surp]
00:10:38.711 17:11:24
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:38.711 node0=1024 expecting 1024 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:10:38.711 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:10:38.712 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:10:38.712 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:38.712 17:11:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:39.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.286 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.286 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.286 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.286 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.286 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
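The INFO line above is the behavior under test: with CLEAR_HUGE=no and NRHUGE=512, scripts/setup.sh leaves the existing 1024-page pool alone instead of shrinking it to the requested size. A minimal sketch of that guard, assuming a simplified single-node form (the path and variable names below are illustrative, not the verbatim setup.sh source):

    # Sketch of the no-shrink guard (assumption: simplified from scripts/setup.sh;
    # the real script iterates nodes and honors CLEAR_HUGE).
    NRHUGE=512
    nr_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(<"$nr_path")
    if ((current >= NRHUGE)); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr_path"   # grow the pool; never shrink an existing allocation
    fi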
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7943828 kB' 'MemAvailable: 9527280 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 460180 kB' 'Inactive: 1457200 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119068 kB' 'Mapped: 48232 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134896 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72064 kB' 'KernelStack: 6308 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:39.286 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: per-key scan of the snapshot until AnonHugePages matches]
00:10:39.287 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:39.287 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:39.287 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
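The per-key trace pattern elided above (setup/common.sh@31-32: IFS=': ', read -r var val _, continue) is get_meminfo walking one snapshot line at a time until the requested key matches, then echoing its value. A condensed sketch of the idiom, assuming this simplified form rather than the exact setup/common.sh source (the real function also snapshots via mapfile and supports per-node /sys/devices/system/node/nodeN/meminfo):

    # Condensed sketch of the get_meminfo scan seen in the xtrace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # e.g. var=MemTotal val=12241972 _=kB
            [[ $var == "$get" ]] || continue   # each skipped key is one 'continue' record
            echo "$val"                        # IFS splitting already dropped the kB unit
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

Run as get_meminfo AnonHugePages it prints 0 on this host, which matches the anon=0 recorded above.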
00:10:39.287 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:10:39.288 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [xtrace elided: same get_meminfo preamble as above, now with get=HugePages_Surp]
00:10:39.288 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944208 kB' 'MemAvailable: 9527660 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459548 kB' 'Inactive: 1457200 kB' 'Active(anon): 127548 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 47952 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134892 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72060 kB' 'KernelStack: 6272 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:39.288 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: per-key scan of the snapshot until HugePages_Surp matches]
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
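With anon and surp read, verify_nr_hugepages queries HugePages_Rsvd next and then compares the per-node totals against the expected count, as in the node0=1024 expecting 1024 check earlier. A rough sketch of that final comparison (assumption: approximates setup/hugepages.sh; the real script accumulates nodes_test[]/nodes_sys[] arrays, and $expected is an illustrative name):

    # Rough sketch of the verification these readings feed into.
    anon=$(get_meminfo AnonHugePages)     # 0 above: no THP pages inflating the count
    surp=$(get_meminfo HugePages_Surp)    # 0 above: folded into the node total via += in the trace
    resv=$(get_meminfo HugePages_Rsvd)    # queried next in the log
    total=$(get_meminfo HugePages_Total)  # 1024
    expected=1024                         # the pre-rerun pool size the no-shrink test expects
    echo "node0=$total expecting $expected"
    [[ $total == "$expected" ]]           # mirrors the [[ 1024 == \1\0\2\4 ]] check above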
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [xtrace elided: same get_meminfo preamble, now with get=HugePages_Rsvd]
00:10:39.290 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944208 kB' 'MemAvailable: 9527660 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459112 kB' 'Inactive: 1457200 kB' 'Active(anon): 127112 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118216 kB' 'Mapped: 47960 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134892 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72060 kB' 'KernelStack: 6240 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: per-key scan toward HugePages_Rsvd continues]
setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.291 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:39.292 nr_hugepages=1024 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:39.292 resv_hugepages=0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:39.292 surplus_hugepages=0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:39.292 anon_hugepages=0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944468 kB' 'MemAvailable: 9527920 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459372 kB' 'Inactive: 1457200 kB' 'Active(anon): 127372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
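(Aside for readers decoding the xtrace above: the get_meminfo calls boil down to a key lookup over a meminfo-style file. A minimal standalone sketch of the same idiom -- hypothetical helper name, not the actual test/setup/common.sh code:)

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch: look up one key in /proc/meminfo, or in a per-node sysfs copy.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs file instead (cf. common.sh@23-24 above).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }            # drop the "Node N " prefix (cf. @29)
            IFS=': ' read -r var val _ <<< "$line" # split "Key:  value kB" (cf. @31)
            if [[ $var == "$get" ]]; then          # cf. the @32 pattern tests above
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the VM in this log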
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944468 kB' 'MemAvailable: 9527920 kB' 'Buffers: 2436 kB' 'Cached: 1797236 kB' 'SwapCached: 0 kB' 'Active: 459372 kB' 'Inactive: 1457200 kB' 'Active(anon): 127372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118476 kB' 'Mapped: 47960 kB' 'Shmem: 10472 kB' 'KReclaimable: 62832 kB' 'Slab: 134892 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72060 kB' 'KernelStack: 6240 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:10:39.292 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace condensed: the same IFS/read/continue walk skips every key from MemTotal through Unaccepted until HugePages_Total matches]
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
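(The (( 1024 == nr_hugepages + surp + resv )) checks at hugepages.sh@107 above and @110 below encode the kernel's hugepage pool accounting: HugePages_Total equals the persistent pool (vm.nr_hugepages) plus surplus pages, with HugePages_Rsvd folded in by the test -- both are 0 here. A hedged re-derivation of that check, outside the harness:)

    # Sketch: re-check the pool accounting the test asserts. Assumes a
    # single default hugepage size, as on the VM in this log.
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0
    nr=$(cat /proc/sys/vm/nr_hugepages)                               # 1024
    (( total == nr + surp )) && echo "pool accounting consistent (resv=$resv)"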
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:39.294 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:39.554 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:39.554 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:39.554 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944260 kB' 'MemUsed: 4297712 kB' 'SwapCached: 0 kB' 'Active: 459068 kB' 'Inactive: 1457200 kB' 'Active(anon): 127068 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1457200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1799672 kB' 'Mapped: 47960 kB' 'AnonPages: 118468 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62832 kB' 'Slab: 134892 kB' 'SReclaimable: 62832 kB' 'SUnreclaim: 72060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
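(get_nodes above walks /sys/devices/system/node with an extglob pattern, and the HugePages_Surp query then reads node0's sysfs meminfo, whose lines carry a "Node 0" prefix. The same per-node walk, as a hedged standalone sketch:)

    # Sketch: enumerate NUMA nodes and pull each node's hugepage total,
    # mirroring get_nodes/get_meminfo above. extglob enables node+([0-9]).
    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        # sysfs lines look like: "Node 0 HugePages_Total:  1024"
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
        echo "node$id: HugePages_Total=$total"
    done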
00:10:39.554 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: the node0 walk continues past every key from MemTotal through HugePages_Free until HugePages_Surp matches]
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
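(The sorted_t[nodes_test[node]]=1 / sorted_s[nodes_sys[node]]=1 assignments just below look odd in isolation: hugepages.sh indexes an array by each node's count, using array keys as a set, so "every node reports the same value" reduces to "one key left". A toy version of that idiom:)

    # Sketch of the keys-as-a-set trick behind sorted_t/sorted_s below.
    declare -A nodes_test=([0]=1024)   # per-node counts, as gathered above
    declare -A sorted=()
    for node in "${!nodes_test[@]}"; do
        sorted[${nodes_test[$node]}]=1
        echo "node$node=${nodes_test[$node]} expecting 1024"
    done
    (( ${#sorted[@]} == 1 )) && echo "all nodes report the same count"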
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:10:39.555 node0=1024 expecting 1024
17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:10:39.555
00:10:39.555 real 0m1.341s
00:10:39.555 user 0m0.647s
00:10:39.555 sys 0m0.786s
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:39.555 17:11:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:10:39.555 ************************************
00:10:39.555 END TEST no_shrink_alloc
00:10:39.556 ************************************
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:10:39.556 17:11:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:10:39.556
00:10:39.556 real 0m6.047s
00:10:39.556 user 0m2.758s
00:10:39.556 sys 0m3.411s
00:10:39.556 17:11:25 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:39.556 ************************************
00:10:39.556 END TEST hugepages
17:11:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:39.556 ************************************
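(clear_hp at hugepages.sh@37-45 above tears the pools back down by writing 0 into every per-node nr_hugepages knob, then exports CLEAR_HUGE=yes for the later setup.sh runs. A hedged sketch of that teardown, with a plain glob in place of the script's extglob; needs root:)

    # Sketch: zero every per-node hugepage pool, as clear_hp does above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes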
-- common/autotest_common.sh@10 -- # set +x 00:10:39.556 ************************************ 00:10:39.556 17:11:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:39.556 17:11:25 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:39.556 17:11:25 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.556 17:11:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:39.556 ************************************ 00:10:39.556 START TEST driver 00:10:39.556 ************************************ 00:10:39.556 17:11:25 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:39.556 * Looking for test storage... 00:10:39.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:39.556 17:11:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:10:39.556 17:11:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:39.556 17:11:25 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:46.114 17:11:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:10:46.114 17:11:31 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:46.114 17:11:31 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.114 17:11:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:46.114 ************************************ 00:10:46.114 START TEST guess_driver 00:10:46.114 ************************************ 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:10:46.114 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:10:46.114 Looking for driver=uio_pci_generic 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:10:46.114 17:11:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:46.114 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:10:46.114 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:10:46.114 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:46.680 17:11:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:53.256 00:10:53.256 real 0m7.151s 00:10:53.256 user 0m0.791s 00:10:53.256 sys 0m1.436s 00:10:53.256 17:11:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.256 17:11:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 ************************************ 00:10:53.256 END TEST guess_driver 00:10:53.256 ************************************ 00:10:53.256 00:10:53.256 real 0m13.172s 
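The guess_driver trace above encodes a simple preference order: vfio is viable only when the kernel exposes populated IOMMU groups or vfio's unsafe no-IOMMU mode is enabled; otherwise the test settles for uio_pci_generic, accepted as soon as modprobe --show-depends resolves to a real .ko. A minimal standalone sketch of that decision (pick_driver and its messages are illustrative names, not the exact SPDK helpers):

    #!/usr/bin/env bash
    # Sketch of the selection logic traced above; pick_driver is an
    # illustrative name, not the SPDK helper itself.
    shopt -s nullglob   # so an empty /sys/kernel/iommu_groups yields 0 entries
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        if (( ${#groups[@]} > 0 )) || { [[ -e $unsafe ]] && [[ $(<"$unsafe") == Y ]]; }; then
            echo vfio
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }
    pick_driver   # on the VM traced above this prints: uio_pci_generic

In the run above both vfio checks fail ((( 0 > 0 )) and the empty unsafe-mode flag), which is why the log falls through to the uio path and settles on uio_pci_generic.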
00:10:53.256 user 0m1.118s 00:10:53.256 sys 0m2.227s 00:10:53.256 17:11:38 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.256 17:11:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 ************************************ 00:10:53.256 END TEST driver 00:10:53.256 ************************************ 00:10:53.256 17:11:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:53.256 17:11:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.256 17:11:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.256 17:11:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 ************************************ 00:10:53.256 START TEST devices 00:10:53.256 ************************************ 00:10:53.256 17:11:38 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:53.256 * Looking for test storage... 00:10:53.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:53.256 17:11:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:10:53.256 17:11:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:10:53.256 17:11:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:53.256 17:11:38 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:53.823 17:11:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:53.823 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.824 17:11:40 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:53.824 17:11:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:53.824 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:10:53.824 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:10:53.824 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:10:54.083 No valid GPT data, bailing 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes 
nvme0n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:10:54.083 No valid GPT data, bailing 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:10:54.083 No valid GPT data, bailing 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:10:54.083 
17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:10:54.083 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:10:54.083 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:10:54.083 No valid GPT data, bailing 00:10:54.340 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:10:54.340 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.340 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.340 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:10:54.340 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:10:54.340 17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:10:54.340 17:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:54.340 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:54.340 17:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:10:54.341 No valid GPT data, bailing 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:10:54.341 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:10:54.341 17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:10:54.341 17:11:40 
setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:10:54.341 No valid GPT data, bailing 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:54.341 17:11:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:10:54.341 17:11:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:10:54.341 17:11:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:10:54.341 17:11:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:10:54.341 17:11:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:10:54.341 17:11:40 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:54.341 17:11:40 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.341 17:11:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:10:54.341 ************************************ 00:10:54.341 START TEST nvme_mount 00:10:54.341 ************************************ 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 
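Everything the device scan above does comes straight from sysfs: a namespace is skipped if /sys/block/<dev>/queue/zoned reads anything but none, its capacity is the 512-byte sector count in /sys/block/<dev>/size, and only disks of at least min_disk_size (3 GiB) survive — which is how the trace ends with 5 usable blocks (the 1 GiB nvme3n1 is dropped) and nvme0n1 as the test disk. A condensed sketch of that filter, with illustrative helper names:

    # Condensed sketch of the filter traced above; is_block_zoned and
    # dev_size_bytes are illustrative names. extglob is needed for the
    # nvme!(*c*) pattern that skips controller nodes such as nvme3c3n1.
    shopt -s extglob nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in devices.sh@198

    is_block_zoned() {   # queue/zoned reads "none" for conventional devices
        [[ -e /sys/block/$1/queue/zoned && $(</sys/block/"$1"/queue/zoned) != none ]]
    }

    dev_size_bytes() {   # /sys/block/<dev>/size counts 512-byte sectors
        echo $(( $(</sys/block/"$1"/size) * 512 ))
    }

    blocks=()
    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}
        is_block_zoned "$dev" && continue
        (( $(dev_size_bytes "$dev") >= min_disk_size )) && blocks+=("$dev")
    done
    echo "${#blocks[@]} usable disks: ${blocks[*]}"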
00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:54.341 17:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:10:55.716 Creating new GPT entries in memory. 00:10:55.716 GPT data structures destroyed! You may now partition the disk using fdisk or 00:10:55.716 other utilities. 00:10:55.716 17:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:10:55.716 17:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:55.716 17:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:55.716 17:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:55.716 17:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:10:56.653 Creating new GPT entries in memory. 00:10:56.653 The operation has completed successfully. 
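The sector bounds in that sgdisk call are pure arithmetic from the traced variables: size starts at 1073741824, the (( size /= 4096 )) step leaves 262144 sectors per partition, and the first partition begins at the conventional 2048-sector offset, so the end sector is 2048 + 262144 - 1 = 264191 — exactly the --new=1:2048:264191 seen above. Reproduced as a worked snippet:

    # Worked form of the bounds computed in common.sh@58-60 above.
    size=1073741824
    (( size /= 4096 ))                      # 262144 sectors per partition
    part_start=2048
    part_end=$(( part_start + size - 1 ))   # 264191
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}
    # a second partition continues at part_end + 1, giving the
    # --new=2:264192:526335 call seen in the dm_mount test below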
00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59420 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:56.653 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.654 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:56.912 17:11:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.912 17:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:56.912 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.912 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:56.912 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.912 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.170 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.170 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:10:57.428 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:57.428 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:57.685 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:57.685 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:57.685 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:57.685 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:57.685 17:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.943 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.202 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.459 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.459 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:58.716 17:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:58.717 17:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:58.717 17:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.974 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.232 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.490 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.490 17:11:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:59.765 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:59.765 00:10:59.765 real 0m5.331s 00:10:59.765 user 0m1.432s 00:10:59.765 sys 0m1.584s 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.765 17:11:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:10:59.765 ************************************ 00:10:59.765 END TEST nvme_mount 00:10:59.765 ************************************ 00:10:59.765 17:11:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:10:59.765 17:11:45 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.765 17:11:45 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.765 17:11:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:10:59.765 ************************************ 00:10:59.765 START TEST dm_mount 00:10:59.765 ************************************ 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:59.765 17:11:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:00.750 Creating new GPT entries in memory. 00:11:00.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:00.750 other utilities. 00:11:00.750 17:11:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:00.750 17:11:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:00.750 17:11:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:00.750 17:11:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:00.750 17:11:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:01.685 Creating new GPT entries in memory. 00:11:01.685 The operation has completed successfully. 00:11:01.685 17:11:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:01.685 17:11:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:01.685 17:11:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:01.685 17:11:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:01.685 17:11:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:03.059 The operation has completed successfully. 
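With both 262144-sector partitions in place, the dm_mount test builds a single device-mapper node over them and then confirms, via readlink and the holders/ directories, that /dev/mapper/nvme_dm_test really resolves to dm-0 backed by both partitions. The trace never shows the table passed to dmsetup, so the concatenated linear mapping below is only an illustrative guess at it:

    # Illustrative dm table only -- the actual one used by devices.sh is not
    # visible in this trace. Row format: start length linear <backing-dev> <offset>.
    printf '%s\n' \
        '0 262144 linear /dev/nvme0n1p1 0' \
        '262144 262144 linear /dev/nvme0n1p2 0' \
        | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # -> /dev/dm-0 in the run above
    ls /sys/class/block/nvme0n1p1/holders/       # dm-0
    ls /sys/class/block/nvme0n1p2/holders/       # dm-0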
00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60053 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.059 17:11:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.059 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.317 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.574 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.574 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:03.831 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:03.832 17:11:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.088 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.345 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.345 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.345 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.345 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.631 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.631 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:11:04.909 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:04.909 00:11:04.909 real 0m5.057s 00:11:04.909 user 0m0.955s 00:11:04.909 sys 0m1.044s 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.909 17:11:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:04.909 ************************************ 00:11:04.909 END TEST dm_mount 00:11:04.909 ************************************ 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:04.909 17:11:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:05.167 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:05.167 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:05.167 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:05.167 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:05.167 17:11:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:05.167 ************************************ 00:11:05.167 END TEST devices 00:11:05.167 ************************************ 00:11:05.167 00:11:05.167 real 0m12.382s 00:11:05.167 user 0m3.280s 00:11:05.167 sys 0m3.428s 00:11:05.167 17:11:51 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.167 17:11:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:05.167 00:11:05.167 real 0m43.856s 00:11:05.167 user 0m10.295s 00:11:05.167 sys 0m13.206s 00:11:05.167 17:11:51 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.167 17:11:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:05.167 ************************************ 00:11:05.167 END TEST setup.sh 00:11:05.167 ************************************ 00:11:05.167 17:11:51 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:05.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.299 Hugepages 00:11:06.299 node hugesize free / total 00:11:06.299 node0 1048576kB 0 / 0 00:11:06.299 node0 2048kB 2048 / 2048 00:11:06.299 
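The hugepage summary just printed by setup.sh status is read straight from sysfs; a minimal sketch that reproduces the same "free / total" figures by hand, assuming the standard Linux per-node hugepage sysfs layout:

# Reproduce the "node hugesize free / total" summary above from sysfs.
# Paths follow the standard per-node hugepage layout; illustrative only.
for d in /sys/devices/system/node/node*/hugepages/hugepages-*kB; do
    node=$(basename "$(dirname "$(dirname "$d")")")   # e.g. node0
    size=${d##*hugepages-}                            # e.g. 2048kB
    echo "$node $size $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
done
# Expected output for this run: "node0 2048kB 2048 / 2048"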
00:11:06.299 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:06.299 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:06.299 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:06.299 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:06.558 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:11:06.558 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:11:06.558 17:11:52 -- spdk/autotest.sh@130 -- # uname -s 00:11:06.558 17:11:52 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:06.558 17:11:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:06.558 17:11:52 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.689 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.689 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.689 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.689 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.689 17:11:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:11:08.623 17:11:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:11:08.623 17:11:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:11:08.623 17:11:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.623 17:11:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:11:08.623 17:11:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:08.623 17:11:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:11:08.623 17:11:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.623 17:11:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.623 17:11:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:08.882 17:11:54 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:08.882 17:11:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.882 17:11:54 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.399 Waiting for block devices as requested 00:11:09.399 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.399 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.399 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.657 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.920 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:14.920 17:12:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:11:14.920 17:12:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1557 -- # continue 00:11:14.920 17:12:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:11:14.920 17:12:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1557 -- # continue 00:11:14.920 17:12:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:11:14.920 17:12:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1557 -- # continue 00:11:14.920 17:12:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:11:14.920 17:12:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:11:14.920 17:12:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:11:14.920 17:12:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:11:14.921 17:12:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:11:14.921 17:12:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:11:14.921 17:12:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:11:14.921 17:12:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:11:14.921 17:12:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:11:14.921 17:12:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:11:14.921 17:12:00 -- common/autotest_common.sh@1557 -- # continue 00:11:14.921 17:12:00 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:11:14.921 17:12:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.921 17:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:14.921 17:12:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:14.921 17:12:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.921 17:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:14.921 17:12:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.744 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.744 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.002 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.002 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.002 17:12:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:16.002 17:12:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.002 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:16.002 17:12:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:16.002 17:12:02 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:11:16.002 17:12:02 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:11:16.002 17:12:02 -- common/autotest_common.sh@1577 -- # bdfs=() 00:11:16.002 17:12:02 -- common/autotest_common.sh@1577 -- # local bdfs 00:11:16.002 17:12:02 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:11:16.002 17:12:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:16.002 17:12:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:11:16.002 17:12:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:16.002 17:12:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:16.002 17:12:02 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:16.002 17:12:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:11:16.002 17:12:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:16.002 17:12:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:11:16.002 17:12:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:16.002 17:12:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:11:16.002 17:12:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:16.002 17:12:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:11:16.002 17:12:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:11:16.003 17:12:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:11:16.003 
17:12:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:16.003 17:12:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:11:16.003 17:12:02 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:11:16.003 17:12:02 -- common/autotest_common.sh@1593 -- # return 0 00:11:16.003 17:12:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:16.003 17:12:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:16.003 17:12:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:16.003 17:12:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:16.003 17:12:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:16.003 17:12:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.003 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:16.003 17:12:02 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:11:16.003 17:12:02 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:16.003 17:12:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:16.003 17:12:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.003 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:16.003 ************************************ 00:11:16.003 START TEST env 00:11:16.003 ************************************ 00:11:16.003 17:12:02 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:16.277 * Looking for test storage... 00:11:16.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:16.277 17:12:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:16.277 17:12:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:16.277 17:12:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.277 17:12:02 env -- common/autotest_common.sh@10 -- # set +x 00:11:16.277 ************************************ 00:11:16.277 START TEST env_memory 00:11:16.277 ************************************ 00:11:16.277 17:12:02 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:16.277 00:11:16.277 00:11:16.277 CUnit - A unit testing framework for C - Version 2.1-3 00:11:16.277 http://cunit.sourceforge.net/ 00:11:16.277 00:11:16.277 00:11:16.277 Suite: memory 00:11:16.277 Test: alloc and free memory map ...[2024-07-24 17:12:02.389965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:16.277 passed 00:11:16.277 Test: mem map translation ...[2024-07-24 17:12:02.450869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:16.277 [2024-07-24 17:12:02.450978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:16.277 [2024-07-24 17:12:02.451079] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:16.277 [2024-07-24 17:12:02.451115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:16.582 passed 00:11:16.582 Test: mem map registration ...[2024-07-24 17:12:02.551284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:11:16.582 [2024-07-24 17:12:02.551390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:16.582 passed 00:11:16.582 Test: mem map adjacent registrations ...passed 00:11:16.582 00:11:16.582 Run Summary: Type Total Ran Passed Failed Inactive 00:11:16.582 suites 1 1 n/a 0 0 00:11:16.582 tests 4 4 4 0 0 00:11:16.582 asserts 152 152 152 0 n/a 00:11:16.582 00:11:16.582 Elapsed time = 0.346 seconds 00:11:16.582 00:11:16.582 real 0m0.385s 00:11:16.582 user 0m0.360s 00:11:16.582 sys 0m0.020s 00:11:16.582 17:12:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.582 ************************************ 00:11:16.582 END TEST env_memory 00:11:16.582 ************************************ 00:11:16.582 17:12:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:16.582 17:12:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:16.582 17:12:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:16.582 17:12:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.582 17:12:02 env -- common/autotest_common.sh@10 -- # set +x 00:11:16.582 ************************************ 00:11:16.582 START TEST env_vtophys 00:11:16.582 ************************************ 00:11:16.582 17:12:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:16.582 EAL: lib.eal log level changed from notice to debug 00:11:16.582 EAL: Detected lcore 0 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 1 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 2 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 3 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 4 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 5 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 6 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 7 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 8 as core 0 on socket 0 00:11:16.582 EAL: Detected lcore 9 as core 0 on socket 0 00:11:16.582 EAL: Maximum logical cores by configuration: 128 00:11:16.582 EAL: Detected CPU lcores: 10 00:11:16.582 EAL: Detected NUMA nodes: 1 00:11:16.582 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:11:16.582 EAL: Detected shared linkage of DPDK 00:11:16.840 EAL: No shared files mode enabled, IPC will be disabled 00:11:16.840 EAL: Selected IOVA mode 'PA' 00:11:16.840 EAL: Probing VFIO support... 00:11:16.840 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:16.840 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:16.840 EAL: Ask a virtual area of 0x2e000 bytes 00:11:16.840 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:16.840 EAL: Setting up physically contiguous memory... 
00:11:16.840 EAL: Setting maximum number of open files to 524288 00:11:16.840 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:16.840 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:16.840 EAL: Ask a virtual area of 0x61000 bytes 00:11:16.840 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:16.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:16.840 EAL: Ask a virtual area of 0x400000000 bytes 00:11:16.840 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:16.840 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:16.840 EAL: Ask a virtual area of 0x61000 bytes 00:11:16.840 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:16.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:16.840 EAL: Ask a virtual area of 0x400000000 bytes 00:11:16.840 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:16.840 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:16.840 EAL: Ask a virtual area of 0x61000 bytes 00:11:16.840 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:16.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:16.840 EAL: Ask a virtual area of 0x400000000 bytes 00:11:16.840 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:16.840 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:16.840 EAL: Ask a virtual area of 0x61000 bytes 00:11:16.840 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:16.840 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:16.840 EAL: Ask a virtual area of 0x400000000 bytes 00:11:16.840 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:16.840 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:16.840 EAL: Hugepages will be freed exactly as allocated. 00:11:16.840 EAL: No shared files mode enabled, IPC is disabled 00:11:16.840 EAL: No shared files mode enabled, IPC is disabled 00:11:16.840 EAL: TSC frequency is ~2200000 KHz 00:11:16.840 EAL: Main lcore 0 is ready (tid=7f903fe11a40;cpuset=[0]) 00:11:16.840 EAL: Trying to obtain current memory policy. 00:11:16.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:16.840 EAL: Restoring previous memory policy: 0 00:11:16.840 EAL: request: mp_malloc_sync 00:11:16.840 EAL: No shared files mode enabled, IPC is disabled 00:11:16.840 EAL: Heap on socket 0 was expanded by 2MB 00:11:16.840 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:16.840 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:16.840 EAL: Mem event callback 'spdk:(nil)' registered 00:11:16.840 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:16.840 00:11:16.840 00:11:16.840 CUnit - A unit testing framework for C - Version 2.1-3 00:11:16.840 http://cunit.sourceforge.net/ 00:11:16.840 00:11:16.840 00:11:16.840 Suite: components_suite 00:11:17.406 Test: vtophys_malloc_test ...passed 00:11:17.406 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:11:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.406 EAL: Restoring previous memory policy: 4 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was expanded by 4MB 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was shrunk by 4MB 00:11:17.406 EAL: Trying to obtain current memory policy. 00:11:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.406 EAL: Restoring previous memory policy: 4 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was expanded by 6MB 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was shrunk by 6MB 00:11:17.406 EAL: Trying to obtain current memory policy. 00:11:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.406 EAL: Restoring previous memory policy: 4 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was expanded by 10MB 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was shrunk by 10MB 00:11:17.406 EAL: Trying to obtain current memory policy. 00:11:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.406 EAL: Restoring previous memory policy: 4 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.406 EAL: Heap on socket 0 was expanded by 18MB 00:11:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.406 EAL: request: mp_malloc_sync 00:11:17.406 EAL: No shared files mode enabled, IPC is disabled 00:11:17.407 EAL: Heap on socket 0 was shrunk by 18MB 00:11:17.407 EAL: Trying to obtain current memory policy. 00:11:17.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.407 EAL: Restoring previous memory policy: 4 00:11:17.407 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.407 EAL: request: mp_malloc_sync 00:11:17.407 EAL: No shared files mode enabled, IPC is disabled 00:11:17.407 EAL: Heap on socket 0 was expanded by 34MB 00:11:17.407 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.407 EAL: request: mp_malloc_sync 00:11:17.407 EAL: No shared files mode enabled, IPC is disabled 00:11:17.407 EAL: Heap on socket 0 was shrunk by 34MB 00:11:17.665 EAL: Trying to obtain current memory policy. 
00:11:17.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.665 EAL: Restoring previous memory policy: 4 00:11:17.665 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.665 EAL: request: mp_malloc_sync 00:11:17.665 EAL: No shared files mode enabled, IPC is disabled 00:11:17.665 EAL: Heap on socket 0 was expanded by 66MB 00:11:17.665 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.665 EAL: request: mp_malloc_sync 00:11:17.665 EAL: No shared files mode enabled, IPC is disabled 00:11:17.665 EAL: Heap on socket 0 was shrunk by 66MB 00:11:17.665 EAL: Trying to obtain current memory policy. 00:11:17.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.923 EAL: Restoring previous memory policy: 4 00:11:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.923 EAL: request: mp_malloc_sync 00:11:17.923 EAL: No shared files mode enabled, IPC is disabled 00:11:17.923 EAL: Heap on socket 0 was expanded by 130MB 00:11:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.923 EAL: request: mp_malloc_sync 00:11:17.923 EAL: No shared files mode enabled, IPC is disabled 00:11:17.923 EAL: Heap on socket 0 was shrunk by 130MB 00:11:18.182 EAL: Trying to obtain current memory policy. 00:11:18.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:18.182 EAL: Restoring previous memory policy: 4 00:11:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:11:18.182 EAL: request: mp_malloc_sync 00:11:18.182 EAL: No shared files mode enabled, IPC is disabled 00:11:18.182 EAL: Heap on socket 0 was expanded by 258MB 00:11:18.748 EAL: Calling mem event callback 'spdk:(nil)' 00:11:18.748 EAL: request: mp_malloc_sync 00:11:18.748 EAL: No shared files mode enabled, IPC is disabled 00:11:18.748 EAL: Heap on socket 0 was shrunk by 258MB 00:11:19.008 EAL: Trying to obtain current memory policy. 00:11:19.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.266 EAL: Restoring previous memory policy: 4 00:11:19.266 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.266 EAL: request: mp_malloc_sync 00:11:19.266 EAL: No shared files mode enabled, IPC is disabled 00:11:19.266 EAL: Heap on socket 0 was expanded by 514MB 00:11:20.202 EAL: Calling mem event callback 'spdk:(nil)' 00:11:20.202 EAL: request: mp_malloc_sync 00:11:20.202 EAL: No shared files mode enabled, IPC is disabled 00:11:20.202 EAL: Heap on socket 0 was shrunk by 514MB 00:11:20.767 EAL: Trying to obtain current memory policy. 
00:11:20.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:21.024 EAL: Restoring previous memory policy: 4 00:11:21.024 EAL: Calling mem event callback 'spdk:(nil)' 00:11:21.024 EAL: request: mp_malloc_sync 00:11:21.024 EAL: No shared files mode enabled, IPC is disabled 00:11:21.024 EAL: Heap on socket 0 was expanded by 1026MB 00:11:22.925 EAL: Calling mem event callback 'spdk:(nil)' 00:11:22.925 EAL: request: mp_malloc_sync 00:11:22.925 EAL: No shared files mode enabled, IPC is disabled 00:11:22.925 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:24.299 passed 00:11:24.299 00:11:24.299 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.299 suites 1 1 n/a 0 0 00:11:24.299 tests 2 2 2 0 0 00:11:24.299 asserts 5264 5264 5264 0 n/a 00:11:24.299 00:11:24.299 Elapsed time = 7.400 seconds 00:11:24.299 EAL: Calling mem event callback 'spdk:(nil)' 00:11:24.299 EAL: request: mp_malloc_sync 00:11:24.299 EAL: No shared files mode enabled, IPC is disabled 00:11:24.299 EAL: Heap on socket 0 was shrunk by 2MB 00:11:24.299 EAL: No shared files mode enabled, IPC is disabled 00:11:24.299 EAL: No shared files mode enabled, IPC is disabled 00:11:24.299 EAL: No shared files mode enabled, IPC is disabled 00:11:24.299 00:11:24.299 real 0m7.702s 00:11:24.299 user 0m6.506s 00:11:24.299 sys 0m1.033s 00:11:24.299 17:12:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.299 17:12:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:24.299 ************************************ 00:11:24.299 END TEST env_vtophys 00:11:24.299 ************************************ 00:11:24.299 17:12:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:24.299 17:12:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.299 17:12:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.299 17:12:10 env -- common/autotest_common.sh@10 -- # set +x 00:11:24.299 ************************************ 00:11:24.299 START TEST env_pci 00:11:24.299 ************************************ 00:11:24.299 17:12:10 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:24.597 00:11:24.597 00:11:24.597 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.597 http://cunit.sourceforge.net/ 00:11:24.597 00:11:24.597 00:11:24.597 Suite: pci 00:11:24.597 Test: pci_hook ...[2024-07-24 17:12:10.540597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61886 has claimed it 00:11:24.597 EAL: Cannot find device (10000:00:01.0) 00:11:24.597 EAL: Failed to attach device on primary process 00:11:24.597 passed 00:11:24.597 00:11:24.597 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.597 suites 1 1 n/a 0 0 00:11:24.597 tests 1 1 1 0 0 00:11:24.597 asserts 25 25 25 0 n/a 00:11:24.597 00:11:24.597 Elapsed time = 0.008 seconds 00:11:24.597 00:11:24.597 real 0m0.082s 00:11:24.597 user 0m0.031s 00:11:24.597 sys 0m0.050s 00:11:24.597 17:12:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.597 17:12:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:24.597 ************************************ 00:11:24.597 END TEST env_pci 00:11:24.597 ************************************ 00:11:24.597 17:12:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:24.597 17:12:10 env -- env/env.sh@15 -- # uname 00:11:24.597 17:12:10 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:24.597 17:12:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:24.597 17:12:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:24.597 17:12:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:24.597 17:12:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.597 17:12:10 env -- common/autotest_common.sh@10 -- # set +x 00:11:24.597 ************************************ 00:11:24.597 START TEST env_dpdk_post_init 00:11:24.597 ************************************ 00:11:24.597 17:12:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:24.597 EAL: Detected CPU lcores: 10 00:11:24.597 EAL: Detected NUMA nodes: 1 00:11:24.597 EAL: Detected shared linkage of DPDK 00:11:24.597 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:24.597 EAL: Selected IOVA mode 'PA' 00:11:24.877 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:24.877 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:24.877 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:24.877 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:11:24.877 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:11:24.877 Starting DPDK initialization... 00:11:24.877 Starting SPDK post initialization... 00:11:24.877 SPDK NVMe probe 00:11:24.877 Attaching to 0000:00:10.0 00:11:24.877 Attaching to 0000:00:11.0 00:11:24.877 Attaching to 0000:00:12.0 00:11:24.877 Attaching to 0000:00:13.0 00:11:24.877 Attached to 0000:00:10.0 00:11:24.877 Attached to 0000:00:11.0 00:11:24.877 Attached to 0000:00:13.0 00:11:24.877 Attached to 0000:00:12.0 00:11:24.877 Cleaning up... 
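env_dpdk_post_init has just attached SPDK's nvme driver to all four controllers; a quick sanity check of which kernel driver a BDF is currently bound to, sketched over standard PCI sysfs and independent of the test binary:

# Show the current driver binding for each controller under test.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    if drv=$(readlink -e "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null); then
        echo "$bdf -> $(basename "$drv")"   # e.g. nvme or uio_pci_generic
    else
        echo "$bdf -> unbound"
    fi
done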
00:11:24.877 00:11:24.877 real 0m0.280s 00:11:24.877 user 0m0.099s 00:11:24.877 sys 0m0.087s 00:11:24.877 17:12:10 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.877 17:12:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:24.877 ************************************ 00:11:24.877 END TEST env_dpdk_post_init 00:11:24.877 ************************************ 00:11:24.877 17:12:10 env -- env/env.sh@26 -- # uname 00:11:24.877 17:12:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:24.877 17:12:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:24.877 17:12:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.877 17:12:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.877 17:12:10 env -- common/autotest_common.sh@10 -- # set +x 00:11:24.877 ************************************ 00:11:24.877 START TEST env_mem_callbacks 00:11:24.877 ************************************ 00:11:24.877 17:12:10 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:24.877 EAL: Detected CPU lcores: 10 00:11:24.877 EAL: Detected NUMA nodes: 1 00:11:24.877 EAL: Detected shared linkage of DPDK 00:11:24.877 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:24.877 EAL: Selected IOVA mode 'PA' 00:11:25.135 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:25.135 00:11:25.135 00:11:25.135 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.135 http://cunit.sourceforge.net/ 00:11:25.135 00:11:25.135 00:11:25.135 Suite: memory 00:11:25.135 Test: test ... 00:11:25.135 register 0x200000200000 2097152 00:11:25.135 malloc 3145728 00:11:25.135 register 0x200000400000 4194304 00:11:25.135 buf 0x2000004fffc0 len 3145728 PASSED 00:11:25.135 malloc 64 00:11:25.135 buf 0x2000004ffec0 len 64 PASSED 00:11:25.135 malloc 4194304 00:11:25.135 register 0x200000800000 6291456 00:11:25.135 buf 0x2000009fffc0 len 4194304 PASSED 00:11:25.135 free 0x2000004fffc0 3145728 00:11:25.135 free 0x2000004ffec0 64 00:11:25.135 unregister 0x200000400000 4194304 PASSED 00:11:25.135 free 0x2000009fffc0 4194304 00:11:25.135 unregister 0x200000800000 6291456 PASSED 00:11:25.135 malloc 8388608 00:11:25.135 register 0x200000400000 10485760 00:11:25.135 buf 0x2000005fffc0 len 8388608 PASSED 00:11:25.135 free 0x2000005fffc0 8388608 00:11:25.135 unregister 0x200000400000 10485760 PASSED 00:11:25.135 passed 00:11:25.135 00:11:25.135 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.135 suites 1 1 n/a 0 0 00:11:25.135 tests 1 1 1 0 0 00:11:25.135 asserts 15 15 15 0 n/a 00:11:25.135 00:11:25.135 Elapsed time = 0.057 seconds 00:11:25.135 00:11:25.135 real 0m0.271s 00:11:25.135 user 0m0.099s 00:11:25.135 sys 0m0.070s 00:11:25.135 17:12:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.135 17:12:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:25.135 ************************************ 00:11:25.135 END TEST env_mem_callbacks 00:11:25.135 ************************************ 00:11:25.135 00:11:25.135 real 0m9.055s 00:11:25.135 user 0m7.228s 00:11:25.135 sys 0m1.439s 00:11:25.135 17:12:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.135 17:12:11 env -- common/autotest_common.sh@10 -- # set +x 00:11:25.135 ************************************ 00:11:25.135 END TEST env 00:11:25.135 
************************************ 00:11:25.135 17:12:11 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:25.135 17:12:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:25.135 17:12:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.135 17:12:11 -- common/autotest_common.sh@10 -- # set +x 00:11:25.135 ************************************ 00:11:25.135 START TEST rpc 00:11:25.135 ************************************ 00:11:25.135 17:12:11 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:25.392 * Looking for test storage... 00:11:25.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:25.392 17:12:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62000 00:11:25.392 17:12:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:25.392 17:12:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:25.392 17:12:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62000 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 62000 ']' 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.392 17:12:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.392 [2024-07-24 17:12:11.527929] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:25.392 [2024-07-24 17:12:11.528132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62000 ] 00:11:25.650 [2024-07-24 17:12:11.702145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.908 [2024-07-24 17:12:11.932988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:25.908 [2024-07-24 17:12:11.933064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62000' to capture a snapshot of events at runtime. 00:11:25.908 [2024-07-24 17:12:11.933083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.908 [2024-07-24 17:12:11.933095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.908 [2024-07-24 17:12:11.933108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62000 for offline analysis/debug. 
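The two app_setup_trace notices above spell out the tracing workflow for this target; taken literally (pid 62000 from this run), the capture step would be:

# Snapshot the bdev tracepoint group from the running target, exactly as
# the notice suggests:
spdk_trace -s spdk_tgt -p 62000
# Or keep the shared-memory trace file for offline analysis:
cp /dev/shm/spdk_tgt_trace.pid62000 /tmp/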
00:11:25.908 [2024-07-24 17:12:11.933145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.841 17:12:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.841 17:12:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:11:26.841 17:12:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:26.841 17:12:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:26.841 17:12:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:26.841 17:12:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:26.841 17:12:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:26.841 17:12:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.841 17:12:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.841 ************************************ 00:11:26.841 START TEST rpc_integrity 00:11:26.841 ************************************ 00:11:26.841 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:11:26.841 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.841 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:26.842 { 00:11:26.842 "name": "Malloc0", 00:11:26.842 "aliases": [ 00:11:26.842 "ad81f7a1-1537-4c58-9491-5f30007a02b2" 00:11:26.842 ], 00:11:26.842 "product_name": "Malloc disk", 00:11:26.842 "block_size": 512, 00:11:26.842 "num_blocks": 16384, 00:11:26.842 "uuid": "ad81f7a1-1537-4c58-9491-5f30007a02b2", 00:11:26.842 "assigned_rate_limits": { 00:11:26.842 "rw_ios_per_sec": 0, 00:11:26.842 "rw_mbytes_per_sec": 0, 00:11:26.842 "r_mbytes_per_sec": 0, 00:11:26.842 "w_mbytes_per_sec": 0 00:11:26.842 }, 00:11:26.842 "claimed": false, 00:11:26.842 "zoned": false, 00:11:26.842 "supported_io_types": { 00:11:26.842 "read": true, 00:11:26.842 "write": true, 00:11:26.842 "unmap": true, 00:11:26.842 "flush": true, 
00:11:26.842 "reset": true, 00:11:26.842 "nvme_admin": false, 00:11:26.842 "nvme_io": false, 00:11:26.842 "nvme_io_md": false, 00:11:26.842 "write_zeroes": true, 00:11:26.842 "zcopy": true, 00:11:26.842 "get_zone_info": false, 00:11:26.842 "zone_management": false, 00:11:26.842 "zone_append": false, 00:11:26.842 "compare": false, 00:11:26.842 "compare_and_write": false, 00:11:26.842 "abort": true, 00:11:26.842 "seek_hole": false, 00:11:26.842 "seek_data": false, 00:11:26.842 "copy": true, 00:11:26.842 "nvme_iov_md": false 00:11:26.842 }, 00:11:26.842 "memory_domains": [ 00:11:26.842 { 00:11:26.842 "dma_device_id": "system", 00:11:26.842 "dma_device_type": 1 00:11:26.842 }, 00:11:26.842 { 00:11:26.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.842 "dma_device_type": 2 00:11:26.842 } 00:11:26.842 ], 00:11:26.842 "driver_specific": {} 00:11:26.842 } 00:11:26.842 ]' 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.842 [2024-07-24 17:12:12.876055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:26.842 [2024-07-24 17:12:12.876171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.842 [2024-07-24 17:12:12.876214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:26.842 [2024-07-24 17:12:12.876230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.842 [2024-07-24 17:12:12.879095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.842 [2024-07-24 17:12:12.879150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:26.842 Passthru0 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.842 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.842 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:26.842 { 00:11:26.842 "name": "Malloc0", 00:11:26.842 "aliases": [ 00:11:26.842 "ad81f7a1-1537-4c58-9491-5f30007a02b2" 00:11:26.842 ], 00:11:26.842 "product_name": "Malloc disk", 00:11:26.842 "block_size": 512, 00:11:26.842 "num_blocks": 16384, 00:11:26.842 "uuid": "ad81f7a1-1537-4c58-9491-5f30007a02b2", 00:11:26.842 "assigned_rate_limits": { 00:11:26.842 "rw_ios_per_sec": 0, 00:11:26.842 "rw_mbytes_per_sec": 0, 00:11:26.842 "r_mbytes_per_sec": 0, 00:11:26.842 "w_mbytes_per_sec": 0 00:11:26.842 }, 00:11:26.842 "claimed": true, 00:11:26.842 "claim_type": "exclusive_write", 00:11:26.842 "zoned": false, 00:11:26.842 "supported_io_types": { 00:11:26.842 "read": true, 00:11:26.842 "write": true, 00:11:26.842 "unmap": true, 00:11:26.842 "flush": true, 00:11:26.842 "reset": true, 00:11:26.842 "nvme_admin": false, 00:11:26.842 "nvme_io": false, 00:11:26.842 "nvme_io_md": false, 00:11:26.842 "write_zeroes": true, 00:11:26.842 "zcopy": true, 
00:11:26.842 "get_zone_info": false, 00:11:26.842 "zone_management": false, 00:11:26.842 "zone_append": false, 00:11:26.842 "compare": false, 00:11:26.842 "compare_and_write": false, 00:11:26.842 "abort": true, 00:11:26.842 "seek_hole": false, 00:11:26.842 "seek_data": false, 00:11:26.842 "copy": true, 00:11:26.842 "nvme_iov_md": false 00:11:26.842 }, 00:11:26.842 "memory_domains": [ 00:11:26.842 { 00:11:26.842 "dma_device_id": "system", 00:11:26.842 "dma_device_type": 1 00:11:26.842 }, 00:11:26.842 { 00:11:26.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.843 "dma_device_type": 2 00:11:26.843 } 00:11:26.843 ], 00:11:26.843 "driver_specific": {} 00:11:26.843 }, 00:11:26.843 { 00:11:26.843 "name": "Passthru0", 00:11:26.843 "aliases": [ 00:11:26.843 "7bb9a2c8-0b18-5f94-97a4-9ca0584dcdd2" 00:11:26.843 ], 00:11:26.843 "product_name": "passthru", 00:11:26.843 "block_size": 512, 00:11:26.843 "num_blocks": 16384, 00:11:26.843 "uuid": "7bb9a2c8-0b18-5f94-97a4-9ca0584dcdd2", 00:11:26.843 "assigned_rate_limits": { 00:11:26.843 "rw_ios_per_sec": 0, 00:11:26.843 "rw_mbytes_per_sec": 0, 00:11:26.843 "r_mbytes_per_sec": 0, 00:11:26.843 "w_mbytes_per_sec": 0 00:11:26.843 }, 00:11:26.843 "claimed": false, 00:11:26.843 "zoned": false, 00:11:26.843 "supported_io_types": { 00:11:26.843 "read": true, 00:11:26.843 "write": true, 00:11:26.843 "unmap": true, 00:11:26.843 "flush": true, 00:11:26.843 "reset": true, 00:11:26.843 "nvme_admin": false, 00:11:26.843 "nvme_io": false, 00:11:26.843 "nvme_io_md": false, 00:11:26.843 "write_zeroes": true, 00:11:26.843 "zcopy": true, 00:11:26.843 "get_zone_info": false, 00:11:26.843 "zone_management": false, 00:11:26.843 "zone_append": false, 00:11:26.843 "compare": false, 00:11:26.843 "compare_and_write": false, 00:11:26.843 "abort": true, 00:11:26.843 "seek_hole": false, 00:11:26.843 "seek_data": false, 00:11:26.843 "copy": true, 00:11:26.843 "nvme_iov_md": false 00:11:26.843 }, 00:11:26.843 "memory_domains": [ 00:11:26.843 { 00:11:26.843 "dma_device_id": "system", 00:11:26.843 "dma_device_type": 1 00:11:26.843 }, 00:11:26.843 { 00:11:26.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.843 "dma_device_type": 2 00:11:26.843 } 00:11:26.843 ], 00:11:26.843 "driver_specific": { 00:11:26.843 "passthru": { 00:11:26.843 "name": "Passthru0", 00:11:26.843 "base_bdev_name": "Malloc0" 00:11:26.843 } 00:11:26.843 } 00:11:26.843 } 00:11:26.843 ]' 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
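Condensed, the rpc_integrity sequence traced above is a bdev lifecycle driven over JSON-RPC; a sketch of the same steps run by hand, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:

# Create, claim via passthru, tear down; verify the bdev count at each step.
scripts/rpc.py bdev_malloc_create 8 512        # -> Malloc0 (8 MiB, 512 B blocks)
scripts/rpc.py bdev_get_bdevs | jq length      # 1
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length      # 2
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_get_bdevs | jq length      # 0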
00:11:26.843 17:12:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.843 17:12:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:26.843 17:12:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:26.843 17:12:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:26.843 00:11:26.843 real 0m0.321s 00:11:26.843 user 0m0.196s 00:11:26.843 sys 0m0.035s 00:11:26.843 17:12:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.843 17:12:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:26.843 ************************************ 00:11:26.843 END TEST rpc_integrity 00:11:26.843 ************************************ 00:11:27.101 17:12:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:27.101 17:12:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.101 17:12:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.101 17:12:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.101 ************************************ 00:11:27.101 START TEST rpc_plugins 00:11:27.101 ************************************ 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:11:27.101 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.101 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:27.101 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:27.101 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.101 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:27.101 { 00:11:27.101 "name": "Malloc1", 00:11:27.101 "aliases": [ 00:11:27.101 "5a908959-dcbd-4c48-800b-39c1238c60ce" 00:11:27.101 ], 00:11:27.101 "product_name": "Malloc disk", 00:11:27.101 "block_size": 4096, 00:11:27.101 "num_blocks": 256, 00:11:27.101 "uuid": "5a908959-dcbd-4c48-800b-39c1238c60ce", 00:11:27.101 "assigned_rate_limits": { 00:11:27.101 "rw_ios_per_sec": 0, 00:11:27.101 "rw_mbytes_per_sec": 0, 00:11:27.101 "r_mbytes_per_sec": 0, 00:11:27.101 "w_mbytes_per_sec": 0 00:11:27.101 }, 00:11:27.101 "claimed": false, 00:11:27.101 "zoned": false, 00:11:27.101 "supported_io_types": { 00:11:27.101 "read": true, 00:11:27.101 "write": true, 00:11:27.101 "unmap": true, 00:11:27.101 "flush": true, 00:11:27.101 "reset": true, 00:11:27.101 "nvme_admin": false, 00:11:27.101 "nvme_io": false, 00:11:27.101 "nvme_io_md": false, 00:11:27.101 "write_zeroes": true, 00:11:27.101 "zcopy": true, 00:11:27.101 "get_zone_info": false, 00:11:27.101 "zone_management": false, 00:11:27.101 "zone_append": false, 00:11:27.101 "compare": false, 00:11:27.101 "compare_and_write": false, 00:11:27.101 "abort": true, 00:11:27.101 "seek_hole": false, 00:11:27.101 "seek_data": false, 00:11:27.101 "copy": true, 00:11:27.101 "nvme_iov_md": false 00:11:27.101 }, 00:11:27.101 "memory_domains": [ 00:11:27.101 { 00:11:27.101 "dma_device_id": "system", 00:11:27.101 "dma_device_type": 1 00:11:27.101 }, 00:11:27.101 { 00:11:27.101 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:27.101 "dma_device_type": 2 00:11:27.102 } 00:11:27.102 ], 00:11:27.102 "driver_specific": {} 00:11:27.102 } 00:11:27.102 ]' 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:27.102 17:12:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:27.102 00:11:27.102 real 0m0.153s 00:11:27.102 user 0m0.103s 00:11:27.102 sys 0m0.015s 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.102 ************************************ 00:11:27.102 END TEST rpc_plugins 00:11:27.102 17:12:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 ************************************ 00:11:27.102 17:12:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:27.102 17:12:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.102 17:12:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.102 17:12:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 ************************************ 00:11:27.102 START TEST rpc_trace_cmd_test 00:11:27.102 ************************************ 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:27.102 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62000", 00:11:27.102 "tpoint_group_mask": "0x8", 00:11:27.102 "iscsi_conn": { 00:11:27.102 "mask": "0x2", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "scsi": { 00:11:27.102 "mask": "0x4", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "bdev": { 00:11:27.102 "mask": "0x8", 00:11:27.102 "tpoint_mask": "0xffffffffffffffff" 00:11:27.102 }, 00:11:27.102 "nvmf_rdma": { 00:11:27.102 "mask": "0x10", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "nvmf_tcp": { 00:11:27.102 "mask": "0x20", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "ftl": { 00:11:27.102 "mask": "0x40", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "blobfs": { 00:11:27.102 "mask": "0x80", 00:11:27.102 
"tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "dsa": { 00:11:27.102 "mask": "0x200", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "thread": { 00:11:27.102 "mask": "0x400", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "nvme_pcie": { 00:11:27.102 "mask": "0x800", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "iaa": { 00:11:27.102 "mask": "0x1000", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "nvme_tcp": { 00:11:27.102 "mask": "0x2000", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "bdev_nvme": { 00:11:27.102 "mask": "0x4000", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 }, 00:11:27.102 "sock": { 00:11:27.102 "mask": "0x8000", 00:11:27.102 "tpoint_mask": "0x0" 00:11:27.102 } 00:11:27.102 }' 00:11:27.102 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:27.358 00:11:27.358 real 0m0.270s 00:11:27.358 user 0m0.238s 00:11:27.358 sys 0m0.022s 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.358 17:12:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.358 ************************************ 00:11:27.358 END TEST rpc_trace_cmd_test 00:11:27.358 ************************************ 00:11:27.616 17:12:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:27.616 17:12:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:27.616 17:12:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:27.616 17:12:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.616 17:12:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.616 17:12:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 ************************************ 00:11:27.616 START TEST rpc_daemon_integrity 00:11:27.616 ************************************ 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:27.616 17:12:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:27.616 { 00:11:27.616 "name": "Malloc2", 00:11:27.616 "aliases": [ 00:11:27.616 "1c658de9-dd4e-4a72-bd0d-4245551a2c79" 00:11:27.616 ], 00:11:27.616 "product_name": "Malloc disk", 00:11:27.616 "block_size": 512, 00:11:27.616 "num_blocks": 16384, 00:11:27.616 "uuid": "1c658de9-dd4e-4a72-bd0d-4245551a2c79", 00:11:27.616 "assigned_rate_limits": { 00:11:27.616 "rw_ios_per_sec": 0, 00:11:27.616 "rw_mbytes_per_sec": 0, 00:11:27.616 "r_mbytes_per_sec": 0, 00:11:27.616 "w_mbytes_per_sec": 0 00:11:27.616 }, 00:11:27.616 "claimed": false, 00:11:27.616 "zoned": false, 00:11:27.616 "supported_io_types": { 00:11:27.616 "read": true, 00:11:27.616 "write": true, 00:11:27.616 "unmap": true, 00:11:27.616 "flush": true, 00:11:27.616 "reset": true, 00:11:27.616 "nvme_admin": false, 00:11:27.616 "nvme_io": false, 00:11:27.616 "nvme_io_md": false, 00:11:27.616 "write_zeroes": true, 00:11:27.616 "zcopy": true, 00:11:27.616 "get_zone_info": false, 00:11:27.616 "zone_management": false, 00:11:27.616 "zone_append": false, 00:11:27.616 "compare": false, 00:11:27.616 "compare_and_write": false, 00:11:27.616 "abort": true, 00:11:27.616 "seek_hole": false, 00:11:27.616 "seek_data": false, 00:11:27.616 "copy": true, 00:11:27.616 "nvme_iov_md": false 00:11:27.616 }, 00:11:27.616 "memory_domains": [ 00:11:27.616 { 00:11:27.616 "dma_device_id": "system", 00:11:27.616 "dma_device_type": 1 00:11:27.616 }, 00:11:27.616 { 00:11:27.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.616 "dma_device_type": 2 00:11:27.616 } 00:11:27.616 ], 00:11:27.616 "driver_specific": {} 00:11:27.616 } 00:11:27.616 ]' 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 [2024-07-24 17:12:13.771459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:27.616 [2024-07-24 17:12:13.771548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.616 [2024-07-24 17:12:13.771583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:27.616 [2024-07-24 17:12:13.771599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.616 [2024-07-24 17:12:13.774395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.616 [2024-07-24 17:12:13.774451] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:27.616 Passthru0 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.616 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:27.616 { 00:11:27.616 "name": "Malloc2", 00:11:27.616 "aliases": [ 00:11:27.616 "1c658de9-dd4e-4a72-bd0d-4245551a2c79" 00:11:27.616 ], 00:11:27.616 "product_name": "Malloc disk", 00:11:27.616 "block_size": 512, 00:11:27.617 "num_blocks": 16384, 00:11:27.617 "uuid": "1c658de9-dd4e-4a72-bd0d-4245551a2c79", 00:11:27.617 "assigned_rate_limits": { 00:11:27.617 "rw_ios_per_sec": 0, 00:11:27.617 "rw_mbytes_per_sec": 0, 00:11:27.617 "r_mbytes_per_sec": 0, 00:11:27.617 "w_mbytes_per_sec": 0 00:11:27.617 }, 00:11:27.617 "claimed": true, 00:11:27.617 "claim_type": "exclusive_write", 00:11:27.617 "zoned": false, 00:11:27.617 "supported_io_types": { 00:11:27.617 "read": true, 00:11:27.617 "write": true, 00:11:27.617 "unmap": true, 00:11:27.617 "flush": true, 00:11:27.617 "reset": true, 00:11:27.617 "nvme_admin": false, 00:11:27.617 "nvme_io": false, 00:11:27.617 "nvme_io_md": false, 00:11:27.617 "write_zeroes": true, 00:11:27.617 "zcopy": true, 00:11:27.617 "get_zone_info": false, 00:11:27.617 "zone_management": false, 00:11:27.617 "zone_append": false, 00:11:27.617 "compare": false, 00:11:27.617 "compare_and_write": false, 00:11:27.617 "abort": true, 00:11:27.617 "seek_hole": false, 00:11:27.617 "seek_data": false, 00:11:27.617 "copy": true, 00:11:27.617 "nvme_iov_md": false 00:11:27.617 }, 00:11:27.617 "memory_domains": [ 00:11:27.617 { 00:11:27.617 "dma_device_id": "system", 00:11:27.617 "dma_device_type": 1 00:11:27.617 }, 00:11:27.617 { 00:11:27.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.617 "dma_device_type": 2 00:11:27.617 } 00:11:27.617 ], 00:11:27.617 "driver_specific": {} 00:11:27.617 }, 00:11:27.617 { 00:11:27.617 "name": "Passthru0", 00:11:27.617 "aliases": [ 00:11:27.617 "02cd2e66-6587-5a9c-a25b-06beee3b8bd3" 00:11:27.617 ], 00:11:27.617 "product_name": "passthru", 00:11:27.617 "block_size": 512, 00:11:27.617 "num_blocks": 16384, 00:11:27.617 "uuid": "02cd2e66-6587-5a9c-a25b-06beee3b8bd3", 00:11:27.617 "assigned_rate_limits": { 00:11:27.617 "rw_ios_per_sec": 0, 00:11:27.617 "rw_mbytes_per_sec": 0, 00:11:27.617 "r_mbytes_per_sec": 0, 00:11:27.617 "w_mbytes_per_sec": 0 00:11:27.617 }, 00:11:27.617 "claimed": false, 00:11:27.617 "zoned": false, 00:11:27.617 "supported_io_types": { 00:11:27.617 "read": true, 00:11:27.617 "write": true, 00:11:27.617 "unmap": true, 00:11:27.617 "flush": true, 00:11:27.617 "reset": true, 00:11:27.617 "nvme_admin": false, 00:11:27.617 "nvme_io": false, 00:11:27.617 "nvme_io_md": false, 00:11:27.617 "write_zeroes": true, 00:11:27.617 "zcopy": true, 00:11:27.617 "get_zone_info": false, 00:11:27.617 "zone_management": false, 00:11:27.617 "zone_append": false, 00:11:27.617 "compare": false, 00:11:27.617 "compare_and_write": false, 00:11:27.617 "abort": true, 00:11:27.617 "seek_hole": false, 00:11:27.617 "seek_data": false, 00:11:27.617 "copy": true, 00:11:27.617 "nvme_iov_md": false 00:11:27.617 }, 00:11:27.617 
"memory_domains": [ 00:11:27.617 { 00:11:27.617 "dma_device_id": "system", 00:11:27.617 "dma_device_type": 1 00:11:27.617 }, 00:11:27.617 { 00:11:27.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.617 "dma_device_type": 2 00:11:27.617 } 00:11:27.617 ], 00:11:27.617 "driver_specific": { 00:11:27.617 "passthru": { 00:11:27.617 "name": "Passthru0", 00:11:27.617 "base_bdev_name": "Malloc2" 00:11:27.617 } 00:11:27.617 } 00:11:27.617 } 00:11:27.617 ]' 00:11:27.617 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:27.617 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:27.617 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:27.617 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.617 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:27.875 00:11:27.875 real 0m0.341s 00:11:27.875 user 0m0.215s 00:11:27.875 sys 0m0.033s 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.875 17:12:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:27.875 ************************************ 00:11:27.875 END TEST rpc_daemon_integrity 00:11:27.875 ************************************ 00:11:27.875 17:12:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:27.875 17:12:13 rpc -- rpc/rpc.sh@84 -- # killprocess 62000 00:11:27.875 17:12:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 62000 ']' 00:11:27.875 17:12:13 rpc -- common/autotest_common.sh@954 -- # kill -0 62000 00:11:27.875 17:12:13 rpc -- common/autotest_common.sh@955 -- # uname 00:11:27.875 17:12:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.875 17:12:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62000 00:11:27.875 17:12:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.875 17:12:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.875 17:12:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62000' 00:11:27.875 killing process with pid 62000 00:11:27.875 17:12:14 rpc -- common/autotest_common.sh@969 -- # kill 62000 00:11:27.875 17:12:14 rpc -- common/autotest_common.sh@974 -- # wait 62000 00:11:30.466 00:11:30.466 real 0m4.816s 00:11:30.466 user 0m5.430s 
00:11:30.466 sys 0m0.796s 00:11:30.466 17:12:16 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.466 ************************************ 00:11:30.466 END TEST rpc 00:11:30.466 ************************************ 00:11:30.466 17:12:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.466 17:12:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:30.466 17:12:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:30.466 17:12:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.466 17:12:16 -- common/autotest_common.sh@10 -- # set +x 00:11:30.466 ************************************ 00:11:30.466 START TEST skip_rpc 00:11:30.466 ************************************ 00:11:30.466 17:12:16 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:30.466 * Looking for test storage... 00:11:30.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:30.466 17:12:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:30.466 17:12:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:30.466 17:12:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:30.466 17:12:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:30.466 17:12:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.466 17:12:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.466 ************************************ 00:11:30.466 START TEST skip_rpc 00:11:30.466 ************************************ 00:11:30.466 17:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:11:30.466 17:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62221 00:11:30.466 17:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:30.466 17:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:30.466 17:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:30.466 [2024-07-24 17:12:16.406582] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:30.466 [2024-07-24 17:12:16.406801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62221 ] 00:11:30.466 [2024-07-24 17:12:16.586191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.724 [2024-07-24 17:12:16.848676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62221 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62221 ']' 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62221 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62221 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.006 killing process with pid 62221 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62221' 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62221 00:11:36.006 17:12:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62221 00:11:37.378 00:11:37.378 real 0m7.229s 00:11:37.378 user 0m6.666s 00:11:37.378 sys 0m0.446s 00:11:37.378 17:12:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.378 ************************************ 00:11:37.378 17:12:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.378 END TEST skip_rpc 00:11:37.378 
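What test_skip_rpc just proved: a target started with --no-rpc-server comes up and runs, but any RPC client call must fail. A condensed sketch (NOT and killprocess are the suite's asserting helpers visible in the trace; binary path condensed):

    spdk_tgt --no-rpc-server -m 0x1 &   # target runs with no RPC listener at all
    spdk_pid=$!
    sleep 5                             # give the reactor time to start
    NOT rpc_cmd spdk_get_version        # must exit non-zero: nothing listens on spdk.sock
    killprocess "$spdk_pid"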
************************************ 00:11:37.378 17:12:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:37.378 17:12:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:37.378 17:12:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.378 17:12:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.378 ************************************ 00:11:37.378 START TEST skip_rpc_with_json 00:11:37.378 ************************************ 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62325 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62325 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62325 ']' 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.378 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.379 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.379 17:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:37.636 [2024-07-24 17:12:23.686714] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:37.636 [2024-07-24 17:12:23.686959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62325 ] 00:11:37.636 [2024-07-24 17:12:23.856138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.893 [2024-07-24 17:12:24.087485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 [2024-07-24 17:12:24.849545] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:38.873 request: 00:11:38.873 { 00:11:38.873 "trtype": "tcp", 00:11:38.873 "method": "nvmf_get_transports", 00:11:38.873 "req_id": 1 00:11:38.873 } 00:11:38.873 Got JSON-RPC error response 00:11:38.873 response: 00:11:38.873 { 00:11:38.873 "code": -19, 00:11:38.873 "message": "No such device" 00:11:38.873 } 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.873 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:38.873 [2024-07-24 17:12:24.861681] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.874 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.874 17:12:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:38.874 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.874 17:12:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:38.874 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.874 17:12:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:38.874 { 00:11:38.874 "subsystems": [ 00:11:38.874 { 00:11:38.874 "subsystem": "keyring", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "iobuf", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "iobuf_set_options", 00:11:38.874 "params": { 00:11:38.874 "small_pool_count": 8192, 00:11:38.874 "large_pool_count": 1024, 00:11:38.874 "small_bufsize": 8192, 00:11:38.874 "large_bufsize": 135168 00:11:38.874 } 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "sock", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "sock_set_default_impl", 00:11:38.874 "params": { 00:11:38.874 "impl_name": "posix" 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "sock_impl_set_options", 00:11:38.874 "params": { 00:11:38.874 "impl_name": "ssl", 00:11:38.874 "recv_buf_size": 4096, 00:11:38.874 "send_buf_size": 4096, 
00:11:38.874 "enable_recv_pipe": true, 00:11:38.874 "enable_quickack": false, 00:11:38.874 "enable_placement_id": 0, 00:11:38.874 "enable_zerocopy_send_server": true, 00:11:38.874 "enable_zerocopy_send_client": false, 00:11:38.874 "zerocopy_threshold": 0, 00:11:38.874 "tls_version": 0, 00:11:38.874 "enable_ktls": false 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "sock_impl_set_options", 00:11:38.874 "params": { 00:11:38.874 "impl_name": "posix", 00:11:38.874 "recv_buf_size": 2097152, 00:11:38.874 "send_buf_size": 2097152, 00:11:38.874 "enable_recv_pipe": true, 00:11:38.874 "enable_quickack": false, 00:11:38.874 "enable_placement_id": 0, 00:11:38.874 "enable_zerocopy_send_server": true, 00:11:38.874 "enable_zerocopy_send_client": false, 00:11:38.874 "zerocopy_threshold": 0, 00:11:38.874 "tls_version": 0, 00:11:38.874 "enable_ktls": false 00:11:38.874 } 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "vmd", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "accel", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "accel_set_options", 00:11:38.874 "params": { 00:11:38.874 "small_cache_size": 128, 00:11:38.874 "large_cache_size": 16, 00:11:38.874 "task_count": 2048, 00:11:38.874 "sequence_count": 2048, 00:11:38.874 "buf_count": 2048 00:11:38.874 } 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "bdev", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "bdev_set_options", 00:11:38.874 "params": { 00:11:38.874 "bdev_io_pool_size": 65535, 00:11:38.874 "bdev_io_cache_size": 256, 00:11:38.874 "bdev_auto_examine": true, 00:11:38.874 "iobuf_small_cache_size": 128, 00:11:38.874 "iobuf_large_cache_size": 16 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "bdev_raid_set_options", 00:11:38.874 "params": { 00:11:38.874 "process_window_size_kb": 1024, 00:11:38.874 "process_max_bandwidth_mb_sec": 0 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "bdev_iscsi_set_options", 00:11:38.874 "params": { 00:11:38.874 "timeout_sec": 30 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "bdev_nvme_set_options", 00:11:38.874 "params": { 00:11:38.874 "action_on_timeout": "none", 00:11:38.874 "timeout_us": 0, 00:11:38.874 "timeout_admin_us": 0, 00:11:38.874 "keep_alive_timeout_ms": 10000, 00:11:38.874 "arbitration_burst": 0, 00:11:38.874 "low_priority_weight": 0, 00:11:38.874 "medium_priority_weight": 0, 00:11:38.874 "high_priority_weight": 0, 00:11:38.874 "nvme_adminq_poll_period_us": 10000, 00:11:38.874 "nvme_ioq_poll_period_us": 0, 00:11:38.874 "io_queue_requests": 0, 00:11:38.874 "delay_cmd_submit": true, 00:11:38.874 "transport_retry_count": 4, 00:11:38.874 "bdev_retry_count": 3, 00:11:38.874 "transport_ack_timeout": 0, 00:11:38.874 "ctrlr_loss_timeout_sec": 0, 00:11:38.874 "reconnect_delay_sec": 0, 00:11:38.874 "fast_io_fail_timeout_sec": 0, 00:11:38.874 "disable_auto_failback": false, 00:11:38.874 "generate_uuids": false, 00:11:38.874 "transport_tos": 0, 00:11:38.874 "nvme_error_stat": false, 00:11:38.874 "rdma_srq_size": 0, 00:11:38.874 "io_path_stat": false, 00:11:38.874 "allow_accel_sequence": false, 00:11:38.874 "rdma_max_cq_size": 0, 00:11:38.874 "rdma_cm_event_timeout_ms": 0, 00:11:38.874 "dhchap_digests": [ 00:11:38.874 "sha256", 00:11:38.874 "sha384", 00:11:38.874 "sha512" 00:11:38.874 ], 00:11:38.874 "dhchap_dhgroups": [ 00:11:38.874 "null", 00:11:38.874 "ffdhe2048", 00:11:38.874 
"ffdhe3072", 00:11:38.874 "ffdhe4096", 00:11:38.874 "ffdhe6144", 00:11:38.874 "ffdhe8192" 00:11:38.874 ] 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "bdev_nvme_set_hotplug", 00:11:38.874 "params": { 00:11:38.874 "period_us": 100000, 00:11:38.874 "enable": false 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "bdev_wait_for_examine" 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "scsi", 00:11:38.874 "config": null 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "scheduler", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "framework_set_scheduler", 00:11:38.874 "params": { 00:11:38.874 "name": "static" 00:11:38.874 } 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "vhost_scsi", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "vhost_blk", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "ublk", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "nbd", 00:11:38.874 "config": [] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "nvmf", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "nvmf_set_config", 00:11:38.874 "params": { 00:11:38.874 "discovery_filter": "match_any", 00:11:38.874 "admin_cmd_passthru": { 00:11:38.874 "identify_ctrlr": false 00:11:38.874 } 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "nvmf_set_max_subsystems", 00:11:38.874 "params": { 00:11:38.874 "max_subsystems": 1024 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "nvmf_set_crdt", 00:11:38.874 "params": { 00:11:38.874 "crdt1": 0, 00:11:38.874 "crdt2": 0, 00:11:38.874 "crdt3": 0 00:11:38.874 } 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "method": "nvmf_create_transport", 00:11:38.874 "params": { 00:11:38.874 "trtype": "TCP", 00:11:38.874 "max_queue_depth": 128, 00:11:38.874 "max_io_qpairs_per_ctrlr": 127, 00:11:38.874 "in_capsule_data_size": 4096, 00:11:38.874 "max_io_size": 131072, 00:11:38.874 "io_unit_size": 131072, 00:11:38.874 "max_aq_depth": 128, 00:11:38.874 "num_shared_buffers": 511, 00:11:38.874 "buf_cache_size": 4294967295, 00:11:38.874 "dif_insert_or_strip": false, 00:11:38.874 "zcopy": false, 00:11:38.874 "c2h_success": true, 00:11:38.874 "sock_priority": 0, 00:11:38.874 "abort_timeout_sec": 1, 00:11:38.874 "ack_timeout": 0, 00:11:38.874 "data_wr_pool_size": 0 00:11:38.874 } 00:11:38.874 } 00:11:38.874 ] 00:11:38.874 }, 00:11:38.874 { 00:11:38.874 "subsystem": "iscsi", 00:11:38.874 "config": [ 00:11:38.874 { 00:11:38.874 "method": "iscsi_set_options", 00:11:38.874 "params": { 00:11:38.874 "node_base": "iqn.2016-06.io.spdk", 00:11:38.874 "max_sessions": 128, 00:11:38.874 "max_connections_per_session": 2, 00:11:38.874 "max_queue_depth": 64, 00:11:38.874 "default_time2wait": 2, 00:11:38.874 "default_time2retain": 20, 00:11:38.874 "first_burst_length": 8192, 00:11:38.874 "immediate_data": true, 00:11:38.874 "allow_duplicated_isid": false, 00:11:38.874 "error_recovery_level": 0, 00:11:38.874 "nop_timeout": 60, 00:11:38.874 "nop_in_interval": 30, 00:11:38.874 "disable_chap": false, 00:11:38.874 "require_chap": false, 00:11:38.874 "mutual_chap": false, 00:11:38.874 "chap_group": 0, 00:11:38.874 "max_large_datain_per_connection": 64, 00:11:38.874 "max_r2t_per_connection": 4, 00:11:38.874 "pdu_pool_size": 36864, 00:11:38.875 "immediate_data_pool_size": 16384, 00:11:38.875 "data_out_pool_size": 2048 
00:11:38.875 } 00:11:38.875 } 00:11:38.875 ] 00:11:38.875 } 00:11:38.875 ] 00:11:38.875 } 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62325 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62325 ']' 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62325 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62325 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.875 killing process with pid 62325 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62325' 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62325 00:11:38.875 17:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62325 00:11:41.403 17:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62375 00:11:41.403 17:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:41.403 17:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62375 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62375 ']' 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62375 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62375 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:46.671 killing process with pid 62375 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62375' 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62375 00:11:46.671 17:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62375 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:48.572 00:11:48.572 real 0m10.813s 00:11:48.572 user 0m10.185s 00:11:48.572 sys 0m0.940s 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_json -- 
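The skip_rpc_with_json sequence running here is a configuration round trip: probe that the TCP transport is absent, create it over RPC, snapshot runtime state with save_config, then restart the target from that JSON alone and confirm the transport is recreated. Roughly (redirections condensed; file names as in the trace):

    rpc_cmd nvmf_get_transports --trtype tcp || true  # fails with the JSON-RPC error above
    rpc_cmd nvmf_create_transport -t tcp              # live change on the first target
    rpc_cmd save_config > config.json                 # full runtime config as JSON
    killprocess "$spdk_pid"
    spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt              # transport replayed from JSON alone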
common/autotest_common.sh@10 -- # set +x 00:11:48.572 ************************************ 00:11:48.572 END TEST skip_rpc_with_json 00:11:48.572 ************************************ 00:11:48.572 17:12:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 ************************************ 00:11:48.572 START TEST skip_rpc_with_delay 00:11:48.572 ************************************ 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:48.572 [2024-07-24 17:12:34.533913] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
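test_skip_rpc_with_delay covers the flag-validation path: --wait-for-rpc is meaningless together with --no-rpc-server, so the target must refuse to start with the error just logged rather than wait forever. The whole check is essentially one line (binary path condensed):

    # Must exit non-zero, printing "Cannot use '--wait-for-rpc' if no RPC server ..."
    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc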
00:11:48.572 [2024-07-24 17:12:34.534102] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:48.572 00:11:48.572 real 0m0.166s 00:11:48.572 user 0m0.088s 00:11:48.572 sys 0m0.074s 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.572 17:12:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 ************************************ 00:11:48.572 END TEST skip_rpc_with_delay 00:11:48.572 ************************************ 00:11:48.572 17:12:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:48.572 17:12:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:48.572 17:12:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.572 17:12:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 ************************************ 00:11:48.572 START TEST exit_on_failed_rpc_init 00:11:48.572 ************************************ 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62509 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62509 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62509 ']' 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.572 17:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 [2024-07-24 17:12:34.771984] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:48.572 [2024-07-24 17:12:34.772185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ] 00:11:48.830 [2024-07-24 17:12:34.945808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.088 [2024-07-24 17:12:35.176961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.023 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.024 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.024 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:50.024 17:12:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.024 [2024-07-24 17:12:36.062174] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:50.024 [2024-07-24 17:12:36.062375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62527 ] 00:11:50.024 [2024-07-24 17:12:36.237615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.590 [2024-07-24 17:12:36.559187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.590 [2024-07-24 17:12:36.559331] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
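The failure exit_on_failed_rpc_init is exercising here: two targets competing for one RPC socket. The first instance owns the default /var/tmp/spdk.sock, so the second must fail RPC initialization and exit non-zero instead of running degraded. A condensed sketch (waitforlisten is the suite's helper that blocks until the pid's RPC socket is up):

    spdk_tgt -m 0x1 &                 # first instance claims /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    NOT spdk_tgt -m 0x2               # second instance: socket in use -> must exit non-zero
    killprocess "$spdk_pid"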
00:11:50.590 [2024-07-24 17:12:36.559358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:50.590 [2024-07-24 17:12:36.559377] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62509 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62509 ']' 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62509 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.848 17:12:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62509 00:11:50.848 17:12:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.848 killing process with pid 62509 00:11:50.848 17:12:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.848 17:12:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62509' 00:11:50.848 17:12:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62509 00:11:50.848 17:12:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62509 00:11:53.390 00:11:53.390 real 0m4.489s 00:11:53.390 user 0m5.145s 00:11:53.390 sys 0m0.679s 00:11:53.390 17:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.390 ************************************ 00:11:53.390 END TEST exit_on_failed_rpc_init 00:11:53.390 17:12:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:53.390 ************************************ 00:11:53.390 17:12:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:53.390 00:11:53.390 real 0m22.991s 00:11:53.390 user 0m22.181s 00:11:53.390 sys 0m2.316s 00:11:53.390 17:12:39 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.391 17:12:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 END TEST skip_rpc 00:11:53.391 ************************************ 00:11:53.391 17:12:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:53.391 17:12:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.391 17:12:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.391 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 
************************************ 00:11:53.391 START TEST rpc_client 00:11:53.391 ************************************ 00:11:53.391 17:12:39 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:53.391 * Looking for test storage... 00:11:53.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:53.391 17:12:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:53.391 OK 00:11:53.391 17:12:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:53.391 00:11:53.391 real 0m0.149s 00:11:53.391 user 0m0.079s 00:11:53.391 sys 0m0.077s 00:11:53.391 17:12:39 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.391 17:12:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 END TEST rpc_client 00:11:53.391 ************************************ 00:11:53.391 17:12:39 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:53.391 17:12:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.391 17:12:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.391 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 START TEST json_config 00:11:53.391 ************************************ 00:11:53.391 17:12:39 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.391 17:12:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.391 17:12:39 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.391 17:12:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.391 17:12:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.391 17:12:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.391 17:12:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.391 17:12:39 json_config -- paths/export.sh@5 -- # export PATH 00:11:53.391 17:12:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@47 -- # : 0 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.391 17:12:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:53.391 WARNING: No tests are enabled so not running JSON configuration tests 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:53.391 17:12:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:53.391 00:11:53.391 real 0m0.074s 00:11:53.391 user 0m0.026s 00:11:53.391 sys 0m0.049s 00:11:53.391 17:12:39 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.391 17:12:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 END TEST json_config 00:11:53.391 ************************************ 00:11:53.391 17:12:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:53.391 17:12:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.391 17:12:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.391 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 START TEST json_config_extra_key 00:11:53.391 ************************************ 00:11:53.391 17:12:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:53.391 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7fc73e2f-e911-44c0-81cd-f23b85a0dd5d 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.391 17:12:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.391 17:12:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.391 17:12:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.391 17:12:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.391 
17:12:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.391 17:12:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.650 17:12:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.650 17:12:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:53.650 17:12:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.650 17:12:39 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:53.650 17:12:39 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:53.650 INFO: launching applications... 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:53.650 17:12:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:53.650 17:12:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:53.651 17:12:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:53.651 17:12:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62707 00:11:53.651 Waiting for target to run... 00:11:53.651 17:12:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:53.651 17:12:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62707 /var/tmp/spdk_tgt.sock 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 62707 ']' 00:11:53.651 17:12:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.651 17:12:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:53.651 [2024-07-24 17:12:39.760252] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
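The json_config_extra_key lines above start spdk_tgt on a private RPC socket and then block in waitforlisten (max_retries=100) until the target answers. A minimal sketch of that start-and-wait pattern, assuming the probe is an rpc_get_methods call against the socket with a 0.5 s poll interval (the real helper lives in test/common/autotest_common.sh):

# Start the target on its own RPC socket, then poll until it answers.
# Illustrative only; paths, flags and retry budget mirror the log above.
sock=/var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!
for ((i = 0; i < 100; i++)); do        # max_retries=100, as logged
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
        &> /dev/null && break          # target is up and listening
    sleep 0.5
done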
00:11:53.651 [2024-07-24 17:12:39.760458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62707 ] 00:11:54.217 [2024-07-24 17:12:40.232454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.217 [2024-07-24 17:12:40.444585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.150 17:12:41 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.150 17:12:41 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:11:55.150 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:55.150 INFO: shutting down applications... 00:11:55.150 17:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:11:55.150 17:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62707 ]] 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62707 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:55.150 17:12:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:55.407 17:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:55.407 17:12:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:55.407 17:12:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:55.407 17:12:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:55.972 17:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:55.972 17:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:55.972 17:12:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:55.972 17:12:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:56.537 17:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:56.537 17:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:56.537 17:12:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:56.537 17:12:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:57.103 17:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:57.103 17:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:57.103 17:12:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:57.103 17:12:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:57.666 17:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:57.666 17:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:57.666 17:12:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 
00:11:57.666 17:12:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62707 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:57.923 SPDK target shutdown done 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:57.923 17:12:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:57.923 Success 00:11:57.923 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:57.923 00:11:57.923 real 0m4.549s 00:11:57.923 user 0m3.828s 00:11:57.923 sys 0m0.625s 00:11:57.923 17:12:44 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.923 17:12:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:57.923 ************************************ 00:11:57.923 END TEST json_config_extra_key 00:11:57.923 ************************************ 00:11:57.923 17:12:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:57.923 17:12:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.923 17:12:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.923 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 ************************************ 00:11:58.181 START TEST alias_rpc 00:11:58.181 ************************************ 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:58.181 * Looking for test storage... 00:11:58.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:58.181 17:12:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:58.181 17:12:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62811 00:11:58.181 17:12:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.181 17:12:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62811 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 62811 ']' 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.181 17:12:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 [2024-07-24 17:12:44.344238] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
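The json_config_extra_key shutdown that completes just above follows a fixed idiom: one SIGINT to the target, then kill -0 existence probes every 0.5 s for at most 30 rounds before giving up. A sketch of that loop (variable names illustrative):

kill -SIGINT "$app_pid"                        # ask the target to exit cleanly
for ((i = 0; i < 30; i++)); do                 # ~15 s budget: 30 polls x 0.5 s
    kill -0 "$app_pid" 2> /dev/null || break   # signal 0 only checks existence
    sleep 0.5
done
echo 'SPDK target shutdown done'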
00:11:58.181 [2024-07-24 17:12:44.344399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:11:58.438 [2024-07-24 17:12:44.505511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.695 [2024-07-24 17:12:44.740487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.624 17:12:45 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.624 17:12:45 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:59.624 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:59.881 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62811 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 62811 ']' 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 62811 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62811 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.881 killing process with pid 62811 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62811' 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@969 -- # kill 62811 00:11:59.881 17:12:45 alias_rpc -- common/autotest_common.sh@974 -- # wait 62811 00:12:02.406 00:12:02.406 real 0m3.973s 00:12:02.406 user 0m4.124s 00:12:02.406 sys 0m0.593s 00:12:02.406 17:12:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.406 17:12:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.406 ************************************ 00:12:02.406 END TEST alias_rpc 00:12:02.406 ************************************ 00:12:02.406 17:12:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:12:02.406 17:12:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:02.406 17:12:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:02.406 17:12:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.406 17:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:02.406 ************************************ 00:12:02.406 START TEST spdkcli_tcp 00:12:02.406 ************************************ 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:02.406 * Looking for test storage... 
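The alias_rpc body above is short: it feeds a JSON configuration to the running target through scripts/rpc.py load_config and then tears the target down with killprocess. Assuming -i is rpc.py's include-aliases switch (plausible here, since this test exercises deprecated method aliases) and that the config arrives on stdin when no file is given, a hypothetical invocation looks like:

# Hypothetical payload; load_config replays each "method" entry as an RPC
# against the running target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "num_blocks": 256, "block_size": 512 } }
      ]
    }
  ]
}
EOF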
00:12:02.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62910 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:02.406 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62910 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 62910 ']' 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.406 17:12:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.406 [2024-07-24 17:12:48.395957] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
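This target, unlike the single-core ones earlier, starts with -m 0x3: mask 0x3 is binary 11, so reactors land on cores 0 and 1 (the two "Reactor started" notices below), and -p 0 picks core 0 as the main core. A throwaway decoder for such masks:

# Decode an SPDK/DPDK core mask into core ids; 0x3 prints "core 0" and "core 1".
mask=0x3
for ((c = 0; c < 64; c++)); do (( (mask >> c) & 1 )) && echo "core $c"; done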
00:12:02.406 [2024-07-24 17:12:48.396152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:12:02.406 [2024-07-24 17:12:48.563133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:02.664 [2024-07-24 17:12:48.800340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.664 [2024-07-24 17:12:48.800351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.598 17:12:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.598 17:12:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:12:03.598 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62927 00:12:03.598 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:03.598 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:03.855 [ 00:12:03.855 "bdev_malloc_delete", 00:12:03.855 "bdev_malloc_create", 00:12:03.855 "bdev_null_resize", 00:12:03.855 "bdev_null_delete", 00:12:03.855 "bdev_null_create", 00:12:03.855 "bdev_nvme_cuse_unregister", 00:12:03.855 "bdev_nvme_cuse_register", 00:12:03.855 "bdev_opal_new_user", 00:12:03.855 "bdev_opal_set_lock_state", 00:12:03.855 "bdev_opal_delete", 00:12:03.855 "bdev_opal_get_info", 00:12:03.855 "bdev_opal_create", 00:12:03.855 "bdev_nvme_opal_revert", 00:12:03.855 "bdev_nvme_opal_init", 00:12:03.855 "bdev_nvme_send_cmd", 00:12:03.855 "bdev_nvme_get_path_iostat", 00:12:03.855 "bdev_nvme_get_mdns_discovery_info", 00:12:03.855 "bdev_nvme_stop_mdns_discovery", 00:12:03.855 "bdev_nvme_start_mdns_discovery", 00:12:03.855 "bdev_nvme_set_multipath_policy", 00:12:03.855 "bdev_nvme_set_preferred_path", 00:12:03.855 "bdev_nvme_get_io_paths", 00:12:03.855 "bdev_nvme_remove_error_injection", 00:12:03.855 "bdev_nvme_add_error_injection", 00:12:03.855 "bdev_nvme_get_discovery_info", 00:12:03.855 "bdev_nvme_stop_discovery", 00:12:03.855 "bdev_nvme_start_discovery", 00:12:03.855 "bdev_nvme_get_controller_health_info", 00:12:03.855 "bdev_nvme_disable_controller", 00:12:03.855 "bdev_nvme_enable_controller", 00:12:03.855 "bdev_nvme_reset_controller", 00:12:03.855 "bdev_nvme_get_transport_statistics", 00:12:03.855 "bdev_nvme_apply_firmware", 00:12:03.855 "bdev_nvme_detach_controller", 00:12:03.856 "bdev_nvme_get_controllers", 00:12:03.856 "bdev_nvme_attach_controller", 00:12:03.856 "bdev_nvme_set_hotplug", 00:12:03.856 "bdev_nvme_set_options", 00:12:03.856 "bdev_passthru_delete", 00:12:03.856 "bdev_passthru_create", 00:12:03.856 "bdev_lvol_set_parent_bdev", 00:12:03.856 "bdev_lvol_set_parent", 00:12:03.856 "bdev_lvol_check_shallow_copy", 00:12:03.856 "bdev_lvol_start_shallow_copy", 00:12:03.856 "bdev_lvol_grow_lvstore", 00:12:03.856 "bdev_lvol_get_lvols", 00:12:03.856 "bdev_lvol_get_lvstores", 00:12:03.856 "bdev_lvol_delete", 00:12:03.856 "bdev_lvol_set_read_only", 00:12:03.856 "bdev_lvol_resize", 00:12:03.856 "bdev_lvol_decouple_parent", 00:12:03.856 "bdev_lvol_inflate", 00:12:03.856 "bdev_lvol_rename", 00:12:03.856 "bdev_lvol_clone_bdev", 00:12:03.856 "bdev_lvol_clone", 00:12:03.856 "bdev_lvol_snapshot", 00:12:03.856 "bdev_lvol_create", 00:12:03.856 "bdev_lvol_delete_lvstore", 00:12:03.856 "bdev_lvol_rename_lvstore", 00:12:03.856 "bdev_lvol_create_lvstore", 
00:12:03.856 "bdev_raid_set_options", 00:12:03.856 "bdev_raid_remove_base_bdev", 00:12:03.856 "bdev_raid_add_base_bdev", 00:12:03.856 "bdev_raid_delete", 00:12:03.856 "bdev_raid_create", 00:12:03.856 "bdev_raid_get_bdevs", 00:12:03.856 "bdev_error_inject_error", 00:12:03.856 "bdev_error_delete", 00:12:03.856 "bdev_error_create", 00:12:03.856 "bdev_split_delete", 00:12:03.856 "bdev_split_create", 00:12:03.856 "bdev_delay_delete", 00:12:03.856 "bdev_delay_create", 00:12:03.856 "bdev_delay_update_latency", 00:12:03.856 "bdev_zone_block_delete", 00:12:03.856 "bdev_zone_block_create", 00:12:03.856 "blobfs_create", 00:12:03.856 "blobfs_detect", 00:12:03.856 "blobfs_set_cache_size", 00:12:03.856 "bdev_xnvme_delete", 00:12:03.856 "bdev_xnvme_create", 00:12:03.856 "bdev_aio_delete", 00:12:03.856 "bdev_aio_rescan", 00:12:03.856 "bdev_aio_create", 00:12:03.856 "bdev_ftl_set_property", 00:12:03.856 "bdev_ftl_get_properties", 00:12:03.856 "bdev_ftl_get_stats", 00:12:03.856 "bdev_ftl_unmap", 00:12:03.856 "bdev_ftl_unload", 00:12:03.856 "bdev_ftl_delete", 00:12:03.856 "bdev_ftl_load", 00:12:03.856 "bdev_ftl_create", 00:12:03.856 "bdev_virtio_attach_controller", 00:12:03.856 "bdev_virtio_scsi_get_devices", 00:12:03.856 "bdev_virtio_detach_controller", 00:12:03.856 "bdev_virtio_blk_set_hotplug", 00:12:03.856 "bdev_iscsi_delete", 00:12:03.856 "bdev_iscsi_create", 00:12:03.856 "bdev_iscsi_set_options", 00:12:03.856 "accel_error_inject_error", 00:12:03.856 "ioat_scan_accel_module", 00:12:03.856 "dsa_scan_accel_module", 00:12:03.856 "iaa_scan_accel_module", 00:12:03.856 "keyring_file_remove_key", 00:12:03.856 "keyring_file_add_key", 00:12:03.856 "keyring_linux_set_options", 00:12:03.856 "iscsi_get_histogram", 00:12:03.856 "iscsi_enable_histogram", 00:12:03.856 "iscsi_set_options", 00:12:03.856 "iscsi_get_auth_groups", 00:12:03.856 "iscsi_auth_group_remove_secret", 00:12:03.856 "iscsi_auth_group_add_secret", 00:12:03.856 "iscsi_delete_auth_group", 00:12:03.856 "iscsi_create_auth_group", 00:12:03.856 "iscsi_set_discovery_auth", 00:12:03.856 "iscsi_get_options", 00:12:03.856 "iscsi_target_node_request_logout", 00:12:03.856 "iscsi_target_node_set_redirect", 00:12:03.856 "iscsi_target_node_set_auth", 00:12:03.856 "iscsi_target_node_add_lun", 00:12:03.856 "iscsi_get_stats", 00:12:03.856 "iscsi_get_connections", 00:12:03.856 "iscsi_portal_group_set_auth", 00:12:03.856 "iscsi_start_portal_group", 00:12:03.856 "iscsi_delete_portal_group", 00:12:03.856 "iscsi_create_portal_group", 00:12:03.856 "iscsi_get_portal_groups", 00:12:03.856 "iscsi_delete_target_node", 00:12:03.856 "iscsi_target_node_remove_pg_ig_maps", 00:12:03.856 "iscsi_target_node_add_pg_ig_maps", 00:12:03.856 "iscsi_create_target_node", 00:12:03.856 "iscsi_get_target_nodes", 00:12:03.856 "iscsi_delete_initiator_group", 00:12:03.856 "iscsi_initiator_group_remove_initiators", 00:12:03.856 "iscsi_initiator_group_add_initiators", 00:12:03.856 "iscsi_create_initiator_group", 00:12:03.856 "iscsi_get_initiator_groups", 00:12:03.856 "nvmf_set_crdt", 00:12:03.856 "nvmf_set_config", 00:12:03.856 "nvmf_set_max_subsystems", 00:12:03.856 "nvmf_stop_mdns_prr", 00:12:03.856 "nvmf_publish_mdns_prr", 00:12:03.856 "nvmf_subsystem_get_listeners", 00:12:03.856 "nvmf_subsystem_get_qpairs", 00:12:03.856 "nvmf_subsystem_get_controllers", 00:12:03.856 "nvmf_get_stats", 00:12:03.856 "nvmf_get_transports", 00:12:03.856 "nvmf_create_transport", 00:12:03.856 "nvmf_get_targets", 00:12:03.856 "nvmf_delete_target", 00:12:03.856 "nvmf_create_target", 00:12:03.856 
"nvmf_subsystem_allow_any_host", 00:12:03.856 "nvmf_subsystem_remove_host", 00:12:03.856 "nvmf_subsystem_add_host", 00:12:03.856 "nvmf_ns_remove_host", 00:12:03.856 "nvmf_ns_add_host", 00:12:03.856 "nvmf_subsystem_remove_ns", 00:12:03.856 "nvmf_subsystem_add_ns", 00:12:03.856 "nvmf_subsystem_listener_set_ana_state", 00:12:03.856 "nvmf_discovery_get_referrals", 00:12:03.856 "nvmf_discovery_remove_referral", 00:12:03.856 "nvmf_discovery_add_referral", 00:12:03.856 "nvmf_subsystem_remove_listener", 00:12:03.856 "nvmf_subsystem_add_listener", 00:12:03.856 "nvmf_delete_subsystem", 00:12:03.856 "nvmf_create_subsystem", 00:12:03.856 "nvmf_get_subsystems", 00:12:03.856 "env_dpdk_get_mem_stats", 00:12:03.856 "nbd_get_disks", 00:12:03.856 "nbd_stop_disk", 00:12:03.856 "nbd_start_disk", 00:12:03.856 "ublk_recover_disk", 00:12:03.856 "ublk_get_disks", 00:12:03.856 "ublk_stop_disk", 00:12:03.856 "ublk_start_disk", 00:12:03.856 "ublk_destroy_target", 00:12:03.856 "ublk_create_target", 00:12:03.856 "virtio_blk_create_transport", 00:12:03.856 "virtio_blk_get_transports", 00:12:03.856 "vhost_controller_set_coalescing", 00:12:03.856 "vhost_get_controllers", 00:12:03.856 "vhost_delete_controller", 00:12:03.856 "vhost_create_blk_controller", 00:12:03.856 "vhost_scsi_controller_remove_target", 00:12:03.856 "vhost_scsi_controller_add_target", 00:12:03.856 "vhost_start_scsi_controller", 00:12:03.856 "vhost_create_scsi_controller", 00:12:03.856 "thread_set_cpumask", 00:12:03.856 "framework_get_governor", 00:12:03.856 "framework_get_scheduler", 00:12:03.856 "framework_set_scheduler", 00:12:03.856 "framework_get_reactors", 00:12:03.856 "thread_get_io_channels", 00:12:03.856 "thread_get_pollers", 00:12:03.856 "thread_get_stats", 00:12:03.856 "framework_monitor_context_switch", 00:12:03.856 "spdk_kill_instance", 00:12:03.856 "log_enable_timestamps", 00:12:03.856 "log_get_flags", 00:12:03.856 "log_clear_flag", 00:12:03.856 "log_set_flag", 00:12:03.856 "log_get_level", 00:12:03.856 "log_set_level", 00:12:03.856 "log_get_print_level", 00:12:03.856 "log_set_print_level", 00:12:03.856 "framework_enable_cpumask_locks", 00:12:03.856 "framework_disable_cpumask_locks", 00:12:03.856 "framework_wait_init", 00:12:03.856 "framework_start_init", 00:12:03.856 "scsi_get_devices", 00:12:03.856 "bdev_get_histogram", 00:12:03.856 "bdev_enable_histogram", 00:12:03.856 "bdev_set_qos_limit", 00:12:03.856 "bdev_set_qd_sampling_period", 00:12:03.856 "bdev_get_bdevs", 00:12:03.856 "bdev_reset_iostat", 00:12:03.856 "bdev_get_iostat", 00:12:03.856 "bdev_examine", 00:12:03.856 "bdev_wait_for_examine", 00:12:03.856 "bdev_set_options", 00:12:03.856 "notify_get_notifications", 00:12:03.856 "notify_get_types", 00:12:03.856 "accel_get_stats", 00:12:03.856 "accel_set_options", 00:12:03.856 "accel_set_driver", 00:12:03.856 "accel_crypto_key_destroy", 00:12:03.856 "accel_crypto_keys_get", 00:12:03.856 "accel_crypto_key_create", 00:12:03.856 "accel_assign_opc", 00:12:03.856 "accel_get_module_info", 00:12:03.856 "accel_get_opc_assignments", 00:12:03.856 "vmd_rescan", 00:12:03.856 "vmd_remove_device", 00:12:03.856 "vmd_enable", 00:12:03.856 "sock_get_default_impl", 00:12:03.856 "sock_set_default_impl", 00:12:03.856 "sock_impl_set_options", 00:12:03.856 "sock_impl_get_options", 00:12:03.856 "iobuf_get_stats", 00:12:03.856 "iobuf_set_options", 00:12:03.856 "framework_get_pci_devices", 00:12:03.856 "framework_get_config", 00:12:03.856 "framework_get_subsystems", 00:12:03.856 "trace_get_info", 00:12:03.856 "trace_get_tpoint_group_mask", 00:12:03.856 
"trace_disable_tpoint_group", 00:12:03.856 "trace_enable_tpoint_group", 00:12:03.856 "trace_clear_tpoint_mask", 00:12:03.856 "trace_set_tpoint_mask", 00:12:03.856 "keyring_get_keys", 00:12:03.856 "spdk_get_version", 00:12:03.856 "rpc_get_methods" 00:12:03.856 ] 00:12:03.856 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.856 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:03.856 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62910 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 62910 ']' 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 62910 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.856 17:12:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62910 00:12:03.857 killing process with pid 62910 00:12:03.857 17:12:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.857 17:12:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.857 17:12:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62910' 00:12:03.857 17:12:49 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 62910 00:12:03.857 17:12:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 62910 00:12:06.444 00:12:06.444 real 0m3.895s 00:12:06.444 user 0m6.798s 00:12:06.444 sys 0m0.595s 00:12:06.444 17:12:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.444 ************************************ 00:12:06.444 END TEST spdkcli_tcp 00:12:06.444 ************************************ 00:12:06.444 17:12:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.444 17:12:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:06.444 17:12:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:06.444 17:12:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.444 17:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:06.444 ************************************ 00:12:06.444 START TEST dpdk_mem_utility 00:12:06.444 ************************************ 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:06.444 * Looking for test storage... 
00:12:06.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:06.444 17:12:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:06.444 17:12:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63024 00:12:06.444 17:12:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63024 00:12:06.444 17:12:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63024 ']' 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.444 17:12:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:06.444 [2024-07-24 17:12:52.345988] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:06.444 [2024-07-24 17:12:52.346167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:12:06.444 [2024-07-24 17:12:52.518007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.702 [2024-07-24 17:12:52.749256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.271 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.271 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:12:07.271 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:07.271 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:07.271 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.271 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:07.530 { 00:12:07.530 "filename": "/tmp/spdk_mem_dump.txt" 00:12:07.530 } 00:12:07.530 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.530 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:07.530 DPDK memory size 820.000000 MiB in 1 heap(s) 00:12:07.530 1 heaps totaling size 820.000000 MiB 00:12:07.530 size: 820.000000 MiB heap id: 0 00:12:07.530 end heaps---------- 00:12:07.530 8 mempools totaling size 598.116089 MiB 00:12:07.530 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:07.530 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:07.530 size: 84.521057 MiB name: bdev_io_63024 00:12:07.530 size: 51.011292 MiB name: evtpool_63024 00:12:07.530 size: 50.003479 MiB name: msgpool_63024 00:12:07.530 size: 21.763794 MiB name: PDU_Pool 00:12:07.530 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:12:07.530 size: 0.026123 MiB name: Session_Pool 00:12:07.530 end mempools------- 00:12:07.530 6 memzones totaling size 4.142822 MiB 00:12:07.530 size: 1.000366 MiB name: RG_ring_0_63024 00:12:07.530 size: 1.000366 MiB name: RG_ring_1_63024 00:12:07.530 size: 1.000366 MiB name: RG_ring_4_63024 00:12:07.530 size: 1.000366 MiB name: RG_ring_5_63024 00:12:07.530 size: 0.125366 MiB name: RG_ring_2_63024 00:12:07.530 size: 0.015991 MiB name: RG_ring_3_63024 00:12:07.530 end memzones------- 00:12:07.530 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:07.530 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:12:07.530 list of free elements. size: 18.452026 MiB 00:12:07.530 element at address: 0x200000400000 with size: 1.999451 MiB 00:12:07.530 element at address: 0x200000800000 with size: 1.996887 MiB 00:12:07.530 element at address: 0x200007000000 with size: 1.995972 MiB 00:12:07.530 element at address: 0x20000b200000 with size: 1.995972 MiB 00:12:07.530 element at address: 0x200019100040 with size: 0.999939 MiB 00:12:07.530 element at address: 0x200019500040 with size: 0.999939 MiB 00:12:07.530 element at address: 0x200019600000 with size: 0.999084 MiB 00:12:07.530 element at address: 0x200003e00000 with size: 0.996094 MiB 00:12:07.530 element at address: 0x200032200000 with size: 0.994324 MiB 00:12:07.530 element at address: 0x200018e00000 with size: 0.959656 MiB 00:12:07.530 element at address: 0x200019900040 with size: 0.936401 MiB 00:12:07.530 element at address: 0x200000200000 with size: 0.830200 MiB 00:12:07.530 element at address: 0x20001b000000 with size: 0.564636 MiB 00:12:07.530 element at address: 0x200019200000 with size: 0.487976 MiB 00:12:07.530 element at address: 0x200019a00000 with size: 0.485413 MiB 00:12:07.530 element at address: 0x200013800000 with size: 0.467651 MiB 00:12:07.530 element at address: 0x200028400000 with size: 0.390442 MiB 00:12:07.530 element at address: 0x200003a00000 with size: 0.351990 MiB 00:12:07.530 list of standard malloc elements. 
size: 199.283569 MiB 00:12:07.530 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:12:07.530 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:12:07.530 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:12:07.530 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:12:07.531 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:12:07.531 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:12:07.531 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:12:07.531 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:12:07.531 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:12:07.531 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:12:07.531 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:12:07.531 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:12:07.531 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003aff980 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003affa80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200003eff000 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:12:07.531 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013877b80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013877c80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013877d80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013877e80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013877f80 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878080 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878180 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878280 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878380 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878480 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200013878580 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x200019abc680 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b0910c0 
with size: 0.000244 MiB 00:12:07.531 element at address: 0x20001b0911c0 with size: 0.000244 MiB [heap element dump condensed: several hundred further entries, each "element at address: 0x... with size: 0.000244 MiB", covering 0x20001b0912c0 through 0x20001b0953c0 and 0x200028463f40 through 0x20002846fb80] 00:12:07.532 element at address: 0x20002846fc80
with size: 0.000244 MiB 00:12:07.532 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:12:07.532 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:12:07.532 list of memzone associated elements. size: 602.264404 MiB 00:12:07.532 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:12:07.532 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:07.532 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:12:07.532 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:07.532 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:12:07.532 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63024_0 00:12:07.532 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:12:07.533 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63024_0 00:12:07.533 element at address: 0x200003fff340 with size: 48.003113 MiB 00:12:07.533 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63024_0 00:12:07.533 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:12:07.533 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:07.533 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:12:07.533 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:07.533 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:12:07.533 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63024 00:12:07.533 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:12:07.533 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63024 00:12:07.533 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:12:07.533 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63024 00:12:07.533 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:12:07.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:07.533 element at address: 0x200019abc780 with size: 1.008179 MiB 00:12:07.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:07.533 element at address: 0x200018efde00 with size: 1.008179 MiB 00:12:07.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:07.533 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:12:07.533 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:07.533 element at address: 0x200003eff100 with size: 1.000549 MiB 00:12:07.533 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63024 00:12:07.533 element at address: 0x200003affb80 with size: 1.000549 MiB 00:12:07.533 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63024 00:12:07.533 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:12:07.533 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63024 00:12:07.533 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:12:07.533 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63024 00:12:07.533 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:12:07.533 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63024 00:12:07.533 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:12:07.533 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:07.533 element at address: 0x200013878680 with size: 0.500549 MiB 00:12:07.533 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:07.533 element at address: 0x200019a7c440 with size: 
0.250549 MiB 00:12:07.533 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:07.533 element at address: 0x200003adf740 with size: 0.125549 MiB 00:12:07.533 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63024 00:12:07.533 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:12:07.533 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:07.533 element at address: 0x200028464140 with size: 0.023804 MiB 00:12:07.533 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:07.533 element at address: 0x200003adb500 with size: 0.016174 MiB 00:12:07.533 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63024 00:12:07.533 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:12:07.533 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:07.533 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:12:07.533 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63024 00:12:07.533 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:12:07.533 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63024 00:12:07.533 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:12:07.533 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:07.533 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:07.533 17:12:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63024 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63024 ']' 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63024 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63024 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.533 killing process with pid 63024 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63024' 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63024 00:12:07.533 17:12:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63024 00:12:10.063 00:12:10.063 real 0m3.740s 00:12:10.063 user 0m3.696s 00:12:10.063 sys 0m0.602s 00:12:10.063 17:12:55 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.063 17:12:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:10.063 ************************************ 00:12:10.063 END TEST dpdk_mem_utility 00:12:10.063 ************************************ 00:12:10.063 17:12:55 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:10.063 17:12:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:10.063 17:12:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.063 17:12:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.063 ************************************ 00:12:10.063 START TEST event 00:12:10.063 ************************************ 00:12:10.063 17:12:55 event -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:10.063 * Looking for test storage... 00:12:10.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:10.063 17:12:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:10.063 17:12:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:10.063 17:12:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:10.063 17:12:56 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:12:10.063 17:12:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.063 17:12:56 event -- common/autotest_common.sh@10 -- # set +x 00:12:10.063 ************************************ 00:12:10.063 START TEST event_perf 00:12:10.063 ************************************ 00:12:10.063 17:12:56 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:10.063 Running I/O for 1 seconds...[2024-07-24 17:12:56.047198] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:10.063 [2024-07-24 17:12:56.047360] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63118 ] 00:12:10.063 [2024-07-24 17:12:56.213094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.321 [2024-07-24 17:12:56.453487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.321 [2024-07-24 17:12:56.453557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.321 Running I/O for 1 seconds...[2024-07-24 17:12:56.453667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.321 [2024-07-24 17:12:56.453830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.695 00:12:11.695 lcore 0: 188430 00:12:11.695 lcore 1: 188428 00:12:11.695 lcore 2: 188430 00:12:11.695 lcore 3: 188431 00:12:11.695 done. 00:12:11.695 00:12:11.695 real 0m1.887s 00:12:11.695 user 0m4.626s 00:12:11.695 sys 0m0.133s 00:12:11.695 17:12:57 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.695 17:12:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:11.695 ************************************ 00:12:11.695 END TEST event_perf 00:12:11.695 ************************************ 00:12:11.953 17:12:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:11.953 17:12:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.953 17:12:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.953 17:12:57 event -- common/autotest_common.sh@10 -- # set +x 00:12:11.953 ************************************ 00:12:11.953 START TEST event_reactor 00:12:11.953 ************************************ 00:12:11.953 17:12:57 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:11.953 [2024-07-24 17:12:57.986643] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
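The four lcore counters printed by event_perf above are the number of events each reactor processed during the one-second run (-t 1) across core mask 0xF. Every test in this log is framed by the run_test helper from common/autotest_common.sh; its START/END banners and the real/user/sys timing suggest roughly the following shape (a condensed sketch, not the real implementation, which also manages xtrace state and argument checks):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"; local rc=$?    # the real/user/sys lines in the log come from this
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }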
00:12:11.953 [2024-07-24 17:12:57.986974] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63158 ] 00:12:11.953 [2024-07-24 17:12:58.150551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.211 [2024-07-24 17:12:58.390251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.585 test_start 00:12:13.585 oneshot 00:12:13.585 tick 100 00:12:13.585 tick 100 00:12:13.585 tick 250 00:12:13.585 tick 100 00:12:13.585 tick 100 00:12:13.585 tick 100 00:12:13.585 tick 250 00:12:13.585 tick 500 00:12:13.585 tick 100 00:12:13.585 tick 100 00:12:13.585 tick 250 00:12:13.585 tick 100 00:12:13.585 tick 100 00:12:13.585 test_end 00:12:13.585 00:12:13.585 real 0m1.835s 00:12:13.585 user 0m1.624s 00:12:13.585 sys 0m0.100s 00:12:13.585 ************************************ 00:12:13.585 END TEST event_reactor 00:12:13.585 ************************************ 00:12:13.585 17:12:59 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.585 17:12:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:13.843 17:12:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:13.843 17:12:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:13.843 17:12:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.843 17:12:59 event -- common/autotest_common.sh@10 -- # set +x 00:12:13.843 ************************************ 00:12:13.843 START TEST event_reactor_perf 00:12:13.843 ************************************ 00:12:13.843 17:12:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:13.843 [2024-07-24 17:12:59.877096] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
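The test_start/tick/test_end block above is the reactor test output: it schedules a one-shot event plus timed pollers, and each tick line appears to report a poller firing together with its registered period (100, 250, 500). To rerun just that binary against this build (paths as used by the job; -t looks like the run time in seconds, and the EAL trace shows it comes up on core mask 0x1):

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/reactor/reactor -t 1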
00:12:13.843 [2024-07-24 17:12:59.877283] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:12:13.843 [2024-07-24 17:13:00.051570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.101 [2024-07-24 17:13:00.278417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.475 test_start 00:12:15.475 test_end 00:12:15.475 Performance: 295697 events per second 00:12:15.475 ************************************ 00:12:15.475 END TEST event_reactor_perf 00:12:15.475 00:12:15.475 real 0m1.834s 00:12:15.475 user 0m1.599s 00:12:15.475 sys 0m0.126s 00:12:15.475 17:13:01 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.475 17:13:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:15.475 ************************************ 00:12:15.734 17:13:01 event -- event/event.sh@49 -- # uname -s 00:12:15.734 17:13:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:15.734 17:13:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:15.734 17:13:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.734 17:13:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.734 17:13:01 event -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 ************************************ 00:12:15.734 START TEST event_scheduler 00:12:15.734 ************************************ 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:15.734 * Looking for test storage... 00:12:15.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:15.734 17:13:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:15.734 17:13:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63268 00:12:15.734 17:13:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:15.734 17:13:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:15.734 17:13:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63268 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63268 ']' 00:12:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.734 17:13:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [2024-07-24 17:13:01.918740] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
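waitforlisten, traced above for pid 63268, blocks until the scheduler app is up and serving RPCs on /var/tmp/spdk.sock. Only its locals and the echo are visible in the trace; the polling loop below is therefore an assumption about the usual shape of such a helper, not a copy of the real one in common/autotest_common.sh:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before it could listen
            [[ -S $rpc_addr ]] && return 0            # assumption: socket present == ready
            sleep 0.1
        done
        return 1
    }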
00:12:15.734 [2024-07-24 17:13:01.918943] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63268 ] 00:12:15.992 [2024-07-24 17:13:02.091937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.250 [2024-07-24 17:13:02.337269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.250 [2024-07-24 17:13:02.337415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.250 [2024-07-24 17:13:02.337464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.250 [2024-07-24 17:13:02.337484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:12:16.816 17:13:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:16.816 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:16.816 POWER: Cannot set governor of lcore 0 to userspace 00:12:16.816 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:16.816 POWER: Cannot set governor of lcore 0 to performance 00:12:16.816 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:16.816 POWER: Cannot set governor of lcore 0 to userspace 00:12:16.816 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:16.816 POWER: Cannot set governor of lcore 0 to userspace 00:12:16.816 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:16.816 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:16.816 POWER: Unable to set Power Management Environment for lcore 0 00:12:16.816 [2024-07-24 17:13:02.869078] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:12:16.816 [2024-07-24 17:13:02.869218] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:12:16.816 [2024-07-24 17:13:02.869355] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:12:16.816 [2024-07-24 17:13:02.869508] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:16.816 [2024-07-24 17:13:02.869633] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:16.816 [2024-07-24 17:13:02.869787] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.816 17:13:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.816 17:13:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 [2024-07-24 17:13:03.177281] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
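The POWER/GUEST_CHANNEL failures above are expected inside a VM: with no cpufreq sysfs interface the DPDK governor cannot initialize, so the dynamic scheduler runs without frequency scaling and just applies its load/core/busy thresholds (20/80/95). The app was started with --wait-for-rpc precisely so the scheduler could be chosen before initialization; the two RPCs the script issues (rpc_cmd is a thin wrapper over scripts/rpc.py) can be replayed by hand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    $rpc -s /var/tmp/spdk.sock framework_start_init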
00:12:17.074 17:13:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.074 17:13:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:17.074 17:13:03 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:17.074 17:13:03 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.074 17:13:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 ************************************ 00:12:17.074 START TEST scheduler_create_thread 00:12:17.074 ************************************ 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 2 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 3 00:12:17.074 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 4 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 5 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 6 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 7 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 8 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 9 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 10 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.075 17:13:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 17:13:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 17:13:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:18.447 17:13:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:18.447 17:13:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 17:13:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.419 ************************************ 00:12:19.419 END TEST scheduler_create_thread 00:12:19.419 ************************************ 00:12:19.419 17:13:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.419 00:12:19.419 real 0m2.139s 00:12:19.419 user 0m0.019s 00:12:19.419 sys 0m0.003s 00:12:19.419 17:13:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.419 17:13:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.419 17:13:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:19.419 17:13:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63268 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63268 ']' 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63268 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63268 00:12:19.419 killing process with pid 63268 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63268' 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63268 00:12:19.419 17:13:05 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63268 00:12:19.676 [2024-07-24 17:13:05.810470] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
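killprocess, traced above for pid 63268 (and earlier for 63024), is the standard teardown: confirm the pid is alive, check on Linux that the target is not a sudo wrapper before signalling it directly, then kill and reap it. A condensed sketch of the path exercised here (the real helper in common/autotest_common.sh also handles the sudo-wrapped case):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        local process_name=
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"    # reap the child and propagate its exit status
        fi
    }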
00:12:21.047 ************************************ 00:12:21.047 END TEST event_scheduler 00:12:21.047 ************************************ 00:12:21.047 00:12:21.047 real 0m5.271s 00:12:21.047 user 0m8.624s 00:12:21.047 sys 0m0.490s 00:12:21.047 17:13:06 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.047 17:13:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:21.047 17:13:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:21.047 17:13:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:21.047 17:13:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.047 17:13:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.047 17:13:07 event -- common/autotest_common.sh@10 -- # set +x 00:12:21.047 ************************************ 00:12:21.047 START TEST app_repeat 00:12:21.047 ************************************ 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63374 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:21.047 Process app_repeat pid: 63374 00:12:21.047 spdk_app_start Round 0 00:12:21.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63374' 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:21.047 17:13:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63374 ']' 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.047 17:13:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:21.047 [2024-07-24 17:13:07.115237] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
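app_repeat_test drives three rounds (for i in {0..2}): each round starts the app_repeat app on core mask 0x3, creates two malloc bdevs, exports them over NBD, verifies data through the block devices, and then restarts the app with spdk_kill_instance SIGTERM. The per-round bdev setup reduces to these RPCs, with names and paths exactly as they appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096    # -> Malloc0 (64 MiB, 4096-byte blocks)
    $rpc -s $sock bdev_malloc_create 64 4096    # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks                 # JSON list of the active mappings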
00:12:21.047 [2024-07-24 17:13:07.115588] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63374 ] 00:12:21.047 [2024-07-24 17:13:07.281931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.304 [2024-07-24 17:13:07.520803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.304 [2024-07-24 17:13:07.520815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.867 17:13:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.867 17:13:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:12:21.867 17:13:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:22.431 Malloc0 00:12:22.431 17:13:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:22.688 Malloc1 00:12:22.688 17:13:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.689 17:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:22.946 /dev/nbd0 00:12:22.946 17:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.946 17:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.946 17:13:09 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:22.946 1+0 records in 00:12:22.946 1+0 records out 00:12:22.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567697 s, 7.2 MB/s 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.946 17:13:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:22.946 17:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.946 17:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.946 17:13:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:23.204 /dev/nbd1 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:23.204 1+0 records in 00:12:23.204 1+0 records out 00:12:23.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330221 s, 12.4 MB/s 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.204 17:13:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.204 17:13:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.204 
17:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:23.462 { 00:12:23.462 "nbd_device": "/dev/nbd0", 00:12:23.462 "bdev_name": "Malloc0" 00:12:23.462 }, 00:12:23.462 { 00:12:23.462 "nbd_device": "/dev/nbd1", 00:12:23.462 "bdev_name": "Malloc1" 00:12:23.462 } 00:12:23.462 ]' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:23.462 { 00:12:23.462 "nbd_device": "/dev/nbd0", 00:12:23.462 "bdev_name": "Malloc0" 00:12:23.462 }, 00:12:23.462 { 00:12:23.462 "nbd_device": "/dev/nbd1", 00:12:23.462 "bdev_name": "Malloc1" 00:12:23.462 } 00:12:23.462 ]' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:23.462 /dev/nbd1' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:23.462 /dev/nbd1' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:23.462 256+0 records in 00:12:23.462 256+0 records out 00:12:23.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106984 s, 98.0 MB/s 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:23.462 17:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:23.730 256+0 records in 00:12:23.730 256+0 records out 00:12:23.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248055 s, 42.3 MB/s 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:23.730 256+0 records in 00:12:23.730 256+0 records out 00:12:23.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315349 s, 33.3 MB/s 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:23.730 17:13:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.730 17:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.006 17:13:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:24.264 17:13:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.264 17:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:24.522 17:13:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:24.522 17:13:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:24.779 17:13:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:26.150 [2024-07-24 17:13:12.185750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.408 [2024-07-24 17:13:12.411253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.408 [2024-07-24 17:13:12.411263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.408 [2024-07-24 17:13:12.600435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:26.408 [2024-07-24 17:13:12.600528] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:27.781 spdk_app_start Round 1 00:12:27.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:27.781 17:13:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:27.781 17:13:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:27.781 17:13:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63374 ']' 00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
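Each round's nbd_rpc_data_verify ends with the write/read-back pass whose dd and cmp lines appear above: 1 MiB of random data is written through both nbd devices with O_DIRECT and compared byte-for-byte against the source file. Reconstructed from that trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write through the nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"    # verify what the bdev stored
    done
    rm "$tmp"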
00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.781 17:13:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:28.052 17:13:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.052 17:13:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:12:28.052 17:13:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:28.321 Malloc0 00:12:28.578 17:13:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:28.836 Malloc1 00:12:28.836 17:13:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.836 17:13:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:28.837 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.837 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.837 17:13:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:29.094 /dev/nbd0 00:12:29.094 17:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.094 17:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:29.094 1+0 records in 00:12:29.094 1+0 records out 
00:12:29.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032154 s, 12.7 MB/s 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:29.094 17:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.095 17:13:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:29.095 17:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.095 17:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.095 17:13:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:29.352 /dev/nbd1 00:12:29.352 17:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.352 17:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:29.352 1+0 records in 00:12:29.352 1+0 records out 00:12:29.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254901 s, 16.1 MB/s 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.352 17:13:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:29.353 17:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.353 17:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.353 17:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:29.353 17:13:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.353 17:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:29.611 { 00:12:29.611 "nbd_device": "/dev/nbd0", 00:12:29.611 "bdev_name": "Malloc0" 00:12:29.611 }, 00:12:29.611 { 00:12:29.611 "nbd_device": "/dev/nbd1", 00:12:29.611 "bdev_name": "Malloc1" 00:12:29.611 } 
00:12:29.611 ]' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:29.611 { 00:12:29.611 "nbd_device": "/dev/nbd0", 00:12:29.611 "bdev_name": "Malloc0" 00:12:29.611 }, 00:12:29.611 { 00:12:29.611 "nbd_device": "/dev/nbd1", 00:12:29.611 "bdev_name": "Malloc1" 00:12:29.611 } 00:12:29.611 ]' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:29.611 /dev/nbd1' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:29.611 /dev/nbd1' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:29.611 256+0 records in 00:12:29.611 256+0 records out 00:12:29.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00982486 s, 107 MB/s 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:29.611 256+0 records in 00:12:29.611 256+0 records out 00:12:29.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246647 s, 42.5 MB/s 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:29.611 256+0 records in 00:12:29.611 256+0 records out 00:12:29.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305953 s, 34.3 MB/s 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:29.611 17:13:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.611 17:13:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.869 17:13:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.127 17:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:30.385 17:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:30.385 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:30.385 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:30.642 17:13:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:30.642 17:13:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:30.900 17:13:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:32.271 [2024-07-24 17:13:18.273998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:32.271 [2024-07-24 17:13:18.494425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.271 [2024-07-24 17:13:18.494429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.528 [2024-07-24 17:13:18.671985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:32.528 [2024-07-24 17:13:18.672108] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:33.897 spdk_app_start Round 2 00:12:33.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:33.897 17:13:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:33.897 17:13:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:33.897 17:13:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63374 ']' 00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
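Every round of app_repeat runs the same data-integrity cycle seen above: nbd_dd_data_verify is called once with operation=write (fill a 1 MiB temp file from /dev/urandom, then dd it onto every nbd device with oflag=direct) and once with operation=verify (cmp the first 1M of each device back against the file, then delete it). A standalone sketch assembled from the traced commands; the dd and cmp invocations are verbatim from the log, while the set -e wrapper and the script framing are assumptions.

    #!/usr/bin/env bash
    # nbd data round trip as traced: write random bytes through each
    # /dev/nbd* device, then byte-compare the device against the source.
    set -euo pipefail

    nbd_list=('/dev/nbd0' '/dev/nbd1')
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # write phase (operation=write in the trace)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify phase (operation=verify): cmp exits non-zero on the first
    # mismatching byte, which aborts the script under set -e
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"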
00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.897 17:13:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:34.154 17:13:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.154 17:13:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:12:34.154 17:13:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.718 Malloc0 00:12:34.718 17:13:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.718 Malloc1 00:12:34.976 17:13:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.976 17:13:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:34.976 /dev/nbd0 00:12:35.233 17:13:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.233 17:13:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:35.233 1+0 records in 00:12:35.233 1+0 records out 
00:12:35.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517397 s, 7.9 MB/s 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.233 17:13:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:35.233 17:13:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.233 17:13:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.233 17:13:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:35.490 /dev/nbd1 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:35.490 1+0 records in 00:12:35.490 1+0 records out 00:12:35.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467738 s, 8.8 MB/s 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.490 17:13:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.490 17:13:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:35.748 { 00:12:35.748 "nbd_device": "/dev/nbd0", 00:12:35.748 "bdev_name": "Malloc0" 00:12:35.748 }, 00:12:35.748 { 00:12:35.748 "nbd_device": "/dev/nbd1", 00:12:35.748 "bdev_name": "Malloc1" 00:12:35.748 } 
00:12:35.748 ]' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:35.748 { 00:12:35.748 "nbd_device": "/dev/nbd0", 00:12:35.748 "bdev_name": "Malloc0" 00:12:35.748 }, 00:12:35.748 { 00:12:35.748 "nbd_device": "/dev/nbd1", 00:12:35.748 "bdev_name": "Malloc1" 00:12:35.748 } 00:12:35.748 ]' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:35.748 /dev/nbd1' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:35.748 /dev/nbd1' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:35.748 256+0 records in 00:12:35.748 256+0 records out 00:12:35.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695533 s, 151 MB/s 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:35.748 256+0 records in 00:12:35.748 256+0 records out 00:12:35.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262049 s, 40.0 MB/s 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.748 17:13:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:36.006 256+0 records in 00:12:36.006 256+0 records out 00:12:36.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029758 s, 35.2 MB/s 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.006 17:13:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.290 17:13:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.548 17:13:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.805 17:13:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.805 17:13:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:37.370 17:13:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:38.305 [2024-07-24 17:13:24.528357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.563 [2024-07-24 17:13:24.748079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.563 [2024-07-24 17:13:24.748089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.820 [2024-07-24 17:13:24.932949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:38.820 [2024-07-24 17:13:24.933083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:40.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:40.193 17:13:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63374 ']' 00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
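The teardown that closes each round, including the one just above, is nbd_get_count confirming that no devices are left: nbd_get_disks returns '[]', jq extracts no .nbd_device fields, and grep -c /dev/nbd counts zero matches. Since grep -c exits 1 when nothing matches, the bare `true` traced at nbd_common.sh@65 implies an `|| true` guard on that pipeline. A sketch of the helper under that reading (the rpc.py, jq, and grep steps are verbatim; the variable plumbing is reconstructed):

    # Count the nbd devices a target still exports; `|| true` keeps the
    # empty-list case (grep -c matches nothing and exits 1) from failing.
    nbd_get_count() {
        local rpc_server=$1
        local json names count
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }

    # After nbd_stop_disks:  nbd_get_count /var/tmp/spdk-nbd.sock  ->  0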
00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.193 17:13:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:12:40.451 17:13:26 event.app_repeat -- event/event.sh@39 -- # killprocess 63374 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63374 ']' 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63374 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63374 00:12:40.451 killing process with pid 63374 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63374' 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63374 00:12:40.451 17:13:26 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63374 00:12:41.829 spdk_app_start is called in Round 0. 00:12:41.829 Shutdown signal received, stop current app iteration 00:12:41.829 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:12:41.829 spdk_app_start is called in Round 1. 00:12:41.829 Shutdown signal received, stop current app iteration 00:12:41.829 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:12:41.829 spdk_app_start is called in Round 2. 00:12:41.829 Shutdown signal received, stop current app iteration 00:12:41.829 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:12:41.829 spdk_app_start is called in Round 3. 00:12:41.829 Shutdown signal received, stop current app iteration 00:12:41.829 17:13:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:41.829 ************************************ 00:12:41.829 END TEST app_repeat 00:12:41.829 ************************************ 00:12:41.829 17:13:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:41.829 00:12:41.829 real 0m20.689s 00:12:41.829 user 0m44.176s 00:12:41.829 sys 0m2.972s 00:12:41.829 17:13:27 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.829 17:13:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.829 17:13:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:41.829 17:13:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:41.829 17:13:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:41.829 17:13:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.829 17:13:27 event -- common/autotest_common.sh@10 -- # set +x 00:12:41.829 ************************************ 00:12:41.829 START TEST cpu_locks 00:12:41.829 ************************************ 00:12:41.829 17:13:27 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:41.829 * Looking for test storage... 
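The starred START TEST / END TEST banners and the real/user/sys triple that just closed app_repeat (and that bracket every test below) come from the run_test wrapper in autotest_common.sh, which times a named command under xtrace. Only slivers of the wrapper appear in the trace (the '[' 2 -le 1 ']' argument check, xtrace_disable), so the body of this sketch is assumed rather than copied:

    # run_test-style wrapper: banner, time the command, banner again.
    run_test_sketch() {
        if (( $# <= 1 )); then         # traced as: '[' 2 -le 1 ']'
            echo "usage: run_test_sketch <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # run_test_sketch cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh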
00:12:41.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:41.829 17:13:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:41.829 17:13:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:41.829 17:13:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:41.829 17:13:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:41.829 17:13:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:41.829 17:13:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.829 17:13:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.829 ************************************ 00:12:41.829 START TEST default_locks 00:12:41.829 ************************************ 00:12:41.829 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:12:41.829 17:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63834 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63834 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63834 ']' 00:12:41.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.830 17:13:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.830 [2024-07-24 17:13:28.029228] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:12:41.830 [2024-07-24 17:13:28.029414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63834 ] 00:12:42.087 [2024-07-24 17:13:28.201633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.345 [2024-07-24 17:13:28.421136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.279 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.279 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:12:43.279 17:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63834 00:12:43.279 17:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63834 00:12:43.279 17:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63834 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 63834 ']' 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 63834 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63834 00:12:43.537 killing process with pid 63834 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63834' 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 63834 00:12:43.537 17:13:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 63834 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63834 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63834 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 63834 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63834 ']' 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.120 17:13:31 
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.120 ERROR: process (pid: 63834) is no longer running 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:46.120 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63834) - No such process 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:46.120 00:12:46.120 real 0m3.880s 00:12:46.120 user 0m3.836s 00:12:46.120 sys 0m0.677s 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.120 17:13:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:46.120 ************************************ 00:12:46.120 END TEST default_locks 00:12:46.120 ************************************ 00:12:46.120 17:13:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:46.120 17:13:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:46.120 17:13:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.120 17:13:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:46.120 ************************************ 00:12:46.120 START TEST default_locks_via_rpc 00:12:46.120 ************************************ 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63905 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63905 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63905 ']' 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
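default_locks closes on a negative assertion: with pid 63834 already killed, `NOT waitforlisten 63834` has to fail, which is why the log shows the 'No such process' error followed by `return 1` and `es=1`. NOT runs its command, records the exit status, and succeeds only when the command failed; the traced fragments (valid_exec_arg, the `(( es > 128 ))` comparison, the final `(( !es == 0 ))`) hint that signal deaths are treated separately. A sketch along those lines, with every unseen detail an assumption:

    # NOT-style inversion helper: pass iff the wrapped command fails.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        # Statuses above 128 usually mean death by signal; the traced
        # helper tests (( es > 128 )), exact policy assumed here.
        if (( es > 128 )); then
            return 1        # crashed rather than failed cleanly
        fi
        (( es != 0 ))       # success for NOT means the command failed
    }

    # NOT_sketch waitforlisten 63834   # passes, since the pid is gone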
00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.120 17:13:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.120 [2024-07-24 17:13:31.946058] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:46.120 [2024-07-24 17:13:31.946470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:12:46.120 [2024-07-24 17:13:32.122782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.120 [2024-07-24 17:13:32.356214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63905 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63905 00:12:47.050 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63905 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 63905 ']' 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 63905 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63905 00:12:47.309 killing process with pid 63905 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63905' 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 63905 00:12:47.309 17:13:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 63905 00:12:49.835 ************************************ 00:12:49.835 END TEST default_locks_via_rpc 00:12:49.835 ************************************ 00:12:49.835 00:12:49.835 real 0m3.888s 00:12:49.835 user 0m3.796s 00:12:49.835 sys 0m0.690s 00:12:49.835 17:13:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.835 17:13:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 17:13:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:49.835 17:13:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:49.835 17:13:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.835 17:13:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 ************************************ 00:12:49.835 START TEST non_locking_app_on_locked_coremask 00:12:49.835 ************************************ 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:12:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63973 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63973 /var/tmp/spdk.sock 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63973 ']' 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.835 17:13:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 [2024-07-24 17:13:35.914428] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
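default_locks_via_rpc, which just finished above, checks the same lock file but flips it at runtime instead of at startup: rpc_cmd framework_disable_cpumask_locks releases the per-core locks, no_locks confirms none are held, and framework_enable_cpumask_locks re-takes them before locks_exist runs. Driving those two RPCs against a live target directly could look like the following; the rpc.py path, socket, and method names are as traced, while the lslocks assertions around them are reconstructed from the locks_exist checks elsewhere in the log.

    #!/usr/bin/env bash
    # Toggle SPDK's CPU core locks on a running target over RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=63905    # spdk_tgt under test, per the log

    "$rpc" -s "$sock" framework_disable_cpumask_locks
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock    # no core lock held now

    "$rpc" -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # core 0 lock is back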
00:12:49.835 [2024-07-24 17:13:35.914675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63973 ] 00:12:50.092 [2024-07-24 17:13:36.090372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.092 [2024-07-24 17:13:36.324175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63995 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63995 /var/tmp/spdk2.sock 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63995 ']' 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.024 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:51.025 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.025 17:13:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.025 [2024-07-24 17:13:37.259415] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:51.025 [2024-07-24 17:13:37.259831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63995 ] 00:12:51.282 [2024-07-24 17:13:37.438652] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
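The launch pair above is the whole point of non_locking_app_on_locked_coremask: the first spdk_tgt on mask 0x1 takes the core 0 lock, yet a second instance on the same mask still starts, because --disable-cpumask-locks skips lock acquisition ('CPU core locks deactivated.' in the notice above) and -r gives it a separate RPC socket. A condensed replay of the two traced command lines; the backgrounding and the wait comments are glue, not from the log.

    #!/usr/bin/env bash
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                  # pid 63973 in the log; takes the core 0 lock
    pid1=$!
    # waitforlisten "$pid1" /var/tmp/spdk.sock

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                               # pid 63995; starts despite the held lock
    # waitforlisten "$pid2" /var/tmp/spdk2.sock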
00:12:51.282 [2024-07-24 17:13:37.438743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.845 [2024-07-24 17:13:37.899089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.742 17:13:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.742 17:13:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:12:53.742 17:13:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63973 00:12:53.742 17:13:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63973 00:12:53.742 17:13:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63973 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63973 ']' 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63973 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63973 00:12:54.673 killing process with pid 63973 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.673 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.674 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63973' 00:12:54.674 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63973 00:12:54.674 17:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63973 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63995 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63995 ']' 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63995 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63995 00:12:58.874 killing process with pid 63995 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.874 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.875 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63995' 00:12:58.875 17:13:44 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63995 00:12:58.875 17:13:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63995 00:13:01.404 00:13:01.404 real 0m11.275s 00:13:01.404 user 0m11.672s 00:13:01.404 sys 0m1.459s 00:13:01.404 17:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.404 17:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:01.404 ************************************ 00:13:01.404 END TEST non_locking_app_on_locked_coremask 00:13:01.404 ************************************ 00:13:01.404 17:13:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:01.404 ************************************ 00:13:01.404 START TEST locking_app_on_unlocked_coremask 00:13:01.404 ************************************ 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:13:01.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64144 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64144 /var/tmp/spdk.sock 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64144 ']' 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:01.404 [2024-07-24 17:13:47.216999] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:01.404 [2024-07-24 17:13:47.217218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64144 ] 00:13:01.404 [2024-07-24 17:13:47.389894] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
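killprocess, traced twice in a row just above (pids 63973 and 63995) and again for 64144 below, is deliberately defensive about what it signals: it checks the pid argument, confirms the process is alive with kill -0, reads the command name with ps --no-headers -o comm=, refuses to signal anything named sudo, and only then kills and reaps. A sketch following exactly the traced steps; the non-Linux branch is assumed.

    # killprocess-style helper, step for step as traced.
    killprocess_sketch() {
        local pid=$1
        local process_name
        [[ -n $pid ]] || return 1         # traced as: '[' -z <pid> ']'
        kill -0 "$pid" || return 1        # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        else
            process_name=unknown          # non-Linux branch not shown in the log
        fi
        [[ $process_name != sudo ]] || return 1   # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap so the END TEST timing is accurate
    }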
00:13:01.404 17:13:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:01.404 17:13:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:13:01.404 ************************************
00:13:01.404 START TEST locking_app_on_unlocked_coremask
00:13:01.404 ************************************
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64144
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64144 /var/tmp/spdk.sock
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64144 ']'
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:01.404 17:13:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:01.404 [2024-07-24 17:13:47.216999] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:01.404 [2024-07-24 17:13:47.217218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64144 ]
00:13:01.404 [2024-07-24 17:13:47.389894] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:13:01.404 [2024-07-24 17:13:47.389958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:01.404 [2024-07-24 17:13:47.618569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64160
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64160 /var/tmp/spdk2.sock
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64160 ']'
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:02.337 17:13:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:02.594 [2024-07-24 17:13:48.511220] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:02.337 [2024-07-24 17:13:48.511906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64160 ]
00:13:02.594 [2024-07-24 17:13:48.687701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.202 [2024-07-24 17:13:49.153361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.101 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:05.101 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:13:05.101 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64160
00:13:05.101 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64160
00:13:05.101 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64144
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64144 ']'
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64144
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:06.033 17:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64144
killing process with pid 64144
17:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:06.033 17:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:06.033 17:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64144'
00:13:06.033 17:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64144
00:13:06.033 17:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64144
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64160
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64160 ']'
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64160
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64160
killing process with pid 64160
17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64160'
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64160
00:13:10.219 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64160
00:13:12.750 ************************************
00:13:12.750 END TEST locking_app_on_unlocked_coremask
00:13:12.750 ************************************
00:13:12.750
00:13:12.750 real 0m11.333s
00:13:12.750 user 0m11.771s
00:13:12.750 sys 0m1.478s
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
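The passing case above hinges on --disable-cpumask-locks: the first spdk_tgt runs on core mask 0x1 without taking the per-core lock files, so a second instance can bind the same core. A hedged sketch of the traced flow (binary path, flags, and sockets as traced; waitforlisten is the autotest helper; the backgrounding details here are editorial):

  # First target skips lock creation on core 0 ...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  waitforlisten "$!" /var/tmp/spdk.sock
  # ... so a second target with default locking can still claim core 0.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  waitforlisten "$!" /var/tmp/spdk2.sock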
00:13:12.750 17:13:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:13:12.750 17:13:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:12.750 17:13:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:12.750 17:13:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:13:12.750 ************************************
00:13:12.750 START TEST locking_app_on_locked_coremask
00:13:12.750 ************************************
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64308
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64308 /var/tmp/spdk.sock
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64308 ']'
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:12.750 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:12.750 [2024-07-24 17:13:58.585702] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:12.750 [2024-07-24 17:13:58.586131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64308 ]
00:13:12.750 [2024-07-24 17:13:58.751424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:12.750 [2024-07-24 17:13:58.974778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64324
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64324 /var/tmp/spdk2.sock
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64324 /var/tmp/spdk2.sock
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64324 /var/tmp/spdk2.sock
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64324 ']'
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:13.684 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:13.684 [2024-07-24 17:13:59.835241] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:13.684 [2024-07-24 17:13:59.835386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64324 ]
00:13:13.942 [2024-07-24 17:14:00.007423] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64308 has claimed it.
00:13:13.942 [2024-07-24 17:14:00.007546] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:13:14.509 ERROR: process (pid: 64324) is no longer running
00:13:14.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64324) - No such process
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64308
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64308
00:13:14.509 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64308
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64308 ']'
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64308
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64308
killing process with pid 64308
17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64308'
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64308
00:13:14.768 17:14:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64308
00:13:17.298 ************************************
00:13:17.298 END TEST locking_app_on_locked_coremask
00:13:17.298 ************************************
00:13:17.298
00:13:17.298 real 0m4.595s
00:13:17.298 user 0m4.838s
00:13:17.298 sys 0m0.834s
00:13:17.298 17:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:17.298 17:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
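Here the roles flip: the first target takes the core 0 lock by default, so the second target's startup is expected to die in claim_cpu_cores, and the test asserts exactly that with the NOT wrapper traced above (event/cpu_locks.sh@120). A sketch of the assertion, with the pids as traced:

  # Second target (would-be pid 64324) must fail: core 0 belongs to pid 64308.
  NOT waitforlisten 64324 /var/tmp/spdk2.sock
  # Target log, as captured above:
  #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64308 has claimed it.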
00:13:17.298 17:14:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:13:17.298 17:14:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:17.298 17:14:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:17.298 17:14:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:13:17.298 ************************************
00:13:17.298 START TEST locking_overlapped_coremask
00:13:17.298 ************************************
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64388
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64388 /var/tmp/spdk.sock
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64388 ']'
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:17.298 17:14:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:17.299 [2024-07-24 17:14:03.246419] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:17.299 [2024-07-24 17:14:03.246619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64388 ]
00:13:17.299 [2024-07-24 17:14:03.423922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:17.556 [2024-07-24 17:14:03.662714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:17.556 [2024-07-24 17:14:03.662799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:17.556 [2024-07-24 17:14:03.662804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64412
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64412 /var/tmp/spdk2.sock
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64412 /var/tmp/spdk2.sock
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64412 /var/tmp/spdk2.sock
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64412 ']'
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:18.489 17:14:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:18.490 [2024-07-24 17:14:04.566290] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:18.490 [2024-07-24 17:14:04.566520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64412 ]
00:13:18.747 [2024-07-24 17:14:04.750916] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64388 has claimed it.
00:13:18.747 [2024-07-24 17:14:04.750998] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:13:19.312 ERROR: process (pid: 64412) is no longer running
00:13:19.312 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64412) - No such process
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64388
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64388 ']'
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64388
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64388
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:19.313 17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64388'
killing process with pid 64388
17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64388
17:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64388
00:13:21.840
00:13:21.840 real 0m4.399s
00:13:21.840 user 0m11.420s
00:13:21.840 sys 0m0.674s
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:13:21.840 ************************************
00:13:21.840 END TEST locking_overlapped_coremask
00:13:21.840 ************************************
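check_remaining_locks (event/cpu_locks.sh@36-38, traced above) asserts that after the failed overlap exactly cores 0-2 stay locked. The traced globs boil down to this comparison (same logic as the trace; only the formatting is editorial):

  # The lock files actually present must equal the expected set 000..002.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]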
00:13:21.840 17:14:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:13:21.840 17:14:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:21.840 17:14:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:21.840 17:14:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:13:21.840 ************************************
00:13:21.840 START TEST locking_overlapped_coremask_via_rpc
00:13:21.840 ************************************
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64476
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64476 /var/tmp/spdk.sock
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64476 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:21.840 17:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:21.840 [2024-07-24 17:14:07.705070] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:21.840 [2024-07-24 17:14:07.705546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64476 ]
00:13:21.840 [2024-07-24 17:14:07.883267] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:13:23.040 [2024-07-24 17:14:09.188217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.609 [2024-07-24 17:14:09.679592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.609 [2024-07-24 17:14:09.679692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.609 [2024-07-24 17:14:09.679705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.508 [2024-07-24 17:14:11.705896] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64476 has claimed it. 
00:13:25.508 request:
00:13:25.508 {
00:13:25.508 "method": "framework_enable_cpumask_locks",
00:13:25.508 "req_id": 1
00:13:25.508 }
00:13:25.508 Got JSON-RPC error response
00:13:25.508 response:
00:13:25.508 {
00:13:25.508 "code": -32603,
00:13:25.508 "message": "Failed to claim CPU core: 2"
00:13:25.508 }
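The request/response pair above is the lock-less second target being asked to claim its cores retroactively while the first target still holds core 2. As a plain rpc.py invocation it would look like this (socket path and method name as traced; the CLI's exact output formatting is not shown in the log):

  # Expected to fail with -32603 "Failed to claim CPU core: 2", as captured above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks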
00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:25.508 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64476 /var/tmp/spdk.sock
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64476 ']'
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:25.509 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64499 /var/tmp/spdk2.sock
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64499 ']'
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:25.767 17:14:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:13:26.025 ************************************
00:13:26.025 END TEST locking_overlapped_coremask_via_rpc
00:13:26.025 ************************************
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:13:26.025
00:13:26.025 real 0m4.648s
00:13:26.025 user 0m1.519s
00:13:26.025 sys 0m0.214s
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:26.025 17:14:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:26.283 17:14:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:13:26.283 17:14:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64476 ]]
00:13:26.283 17:14:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64476
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64476 ']'
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64476
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64476
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 64476
17:14:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64476'
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64476
00:13:26.283 17:14:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64476
00:13:28.811 17:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64499 ]]
00:13:28.811 17:14:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64499
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64499 ']'
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64499
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64499
killing process with pid 64499
17:14:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64499'
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64499
00:13:28.811 17:14:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64499
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64476 ]]
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64476
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64476 ']'
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64476
00:13:30.712 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64476) - No such process
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64476 is not found'
Process with pid 64476 is not found
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64499 ]]
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64499
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64499 ']'
Process with pid 64499 is not found
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64499
00:13:30.712 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64499) - No such process
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64499 is not found'
00:13:30.712 17:14:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:13:30.712 ************************************
00:13:30.712 END TEST cpu_locks
00:13:30.712 ************************************
00:13:30.712
00:13:30.712 real 0m49.019s
00:13:30.712 user 1m22.966s
00:13:30.712 sys 0m7.227s
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:30.712 17:14:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:13:30.712 ************************************
00:13:30.712 END TEST event
00:13:30.712 ************************************
00:13:30.712
00:13:30.712 real 1m20.940s
00:13:30.712 user 2m23.738s
00:13:30.712 sys 0m11.296s
00:13:30.712 17:14:16 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:30.712 17:14:16 event -- common/autotest_common.sh@10 -- # set +x
00:13:30.712 17:14:16 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:13:30.712 17:14:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:30.712 17:14:16 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:30.712 17:14:16 -- common/autotest_common.sh@10 -- # set +x
00:13:30.712 ************************************
00:13:30.712 START TEST thread
00:13:30.712 ************************************
00:13:30.712 17:14:16 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:13:30.969 * Looking for test storage...
00:13:30.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:13:30.969 17:14:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:13:30.969 17:14:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:13:30.969 17:14:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:30.969 17:14:16 thread -- common/autotest_common.sh@10 -- # set +x
00:13:30.969 ************************************
00:13:30.969 START TEST thread_poller_perf
00:13:30.969 ************************************
00:13:30.969 17:14:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:13:30.969 [2024-07-24 17:14:17.050486] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:30.969 [2024-07-24 17:14:17.050712] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64683 ]
00:13:31.227 [2024-07-24 17:14:17.218326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:31.486 [2024-07-24 17:14:17.501587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:31.486 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:13:32.860 ======================================
00:13:32.860 busy:2208312755 (cyc)
00:13:32.860 total_run_count: 321000
00:13:32.860 tsc_hz: 2200000000 (cyc)
00:13:32.860 ======================================
00:13:32.860 poller_cost: 6879 (cyc), 3126 (nsec)
00:13:32.860
00:13:32.860 real 0m1.880s
00:13:32.860 user 0m1.652s
00:13:32.860 sys 0m0.116s
00:13:32.860 17:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:32.860 ************************************
00:13:32.860 END TEST thread_poller_perf
00:13:32.860 ************************************
00:13:32.860 17:14:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
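The summary block is straightforward arithmetic: busy cycles over iterations gives the per-poll cost, and the reported 2200000000 Hz TSC converts cycles to nanoseconds. Re-deriving the reported numbers (an editorial shell check, not part of the test):

  echo $(( 2208312755 / 321000 ))             # ~6879 cyc per poll
  awk 'BEGIN { printf "%d\n", 6879 / 2.2 }'   # 2.2 cyc per nsec -> ~3126 nsec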
00:13:32.860 17:14:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:13:32.860 17:14:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:13:32.860 17:14:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:32.860 17:14:18 thread -- common/autotest_common.sh@10 -- # set +x
00:13:32.860 ************************************
00:13:32.860 START TEST thread_poller_perf
00:13:32.860 ************************************
00:13:32.860 17:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:13:32.860 [2024-07-24 17:14:18.991768] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:32.860 [2024-07-24 17:14:18.991919] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64725 ]
00:13:33.118 [2024-07-24 17:14:19.166163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:33.376 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:13:33.376 [2024-07-24 17:14:19.387252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.749 ======================================
00:13:34.749 busy:2204118921 (cyc)
00:13:34.749 total_run_count: 3957000
00:13:34.749 tsc_hz: 2200000000 (cyc)
00:13:34.749 ======================================
00:13:34.749 poller_cost: 557 (cyc), 253 (nsec)
00:13:34.749
00:13:34.749 real 0m1.821s
00:13:34.749 user 0m1.579s
00:13:34.749 sys 0m0.131s
00:13:34.749 17:14:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:34.749 ************************************
00:13:34.749 END TEST thread_poller_perf
00:13:34.749 ************************************
00:13:34.749 17:14:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:13:34.749 17:14:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:13:34.749 ************************************
00:13:34.749 END TEST thread
00:13:34.749 ************************************
00:13:34.749
00:13:34.749 real 0m3.892s
00:13:34.749 user 0m3.294s
00:13:34.749 sys 0m0.364s
00:13:34.749 17:14:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:34.749 17:14:20 thread -- common/autotest_common.sh@10 -- # set +x
00:13:34.750 17:14:20 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:13:34.750 17:14:20 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:13:34.750 17:14:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:34.750 17:14:20 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:34.750 17:14:20 -- common/autotest_common.sh@10 -- # set +x
00:13:34.750 ************************************
00:13:34.750 START TEST app_cmdline
00:13:34.750 ************************************
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:13:34.750 * Looking for test storage...
00:13:34.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:13:34.750 17:14:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:13:34.750 17:14:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64806
00:13:34.750 17:14:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:13:34.750 17:14:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64806
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 64806 ']'
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:34.750 17:14:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:13:35.008 [2024-07-24 17:14:21.045525] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:13:35.008 [2024-07-24 17:14:21.045964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64806 ]
00:13:35.008 [2024-07-24 17:14:21.211914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:35.266 [2024-07-24 17:14:21.446503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:36.199 17:14:22 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:36.199 17:14:22 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:13:36.199 17:14:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:13:36.457 {
00:13:36.457 "version": "SPDK v24.09-pre git sha1 dca21ec0f",
00:13:36.457 "fields": {
00:13:36.457 "major": 24,
00:13:36.457 "minor": 9,
00:13:36.457 "patch": 0,
00:13:36.457 "suffix": "-pre",
00:13:36.457 "commit": "dca21ec0f"
00:13:36.457 }
00:13:36.457 }
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@26 -- # sort
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:13:36.457 17:14:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:13:36.457 17:14:22 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:13:36.715 request:
00:13:36.715 {
00:13:36.715 "method": "env_dpdk_get_mem_stats",
00:13:36.715 "req_id": 1
00:13:36.715 }
00:13:36.715 Got JSON-RPC error response
00:13:36.715 response:
00:13:36.715 {
00:13:36.715 "code": -32601,
00:13:36.715 "message": "Method not found"
00:13:36.715 }
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:36.715 17:14:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64806
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 64806 ']'
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 64806
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64806
killing process with pid 64806
17:14:22 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64806'
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@969 -- # kill 64806
00:13:36.715 17:14:22 app_cmdline -- common/autotest_common.sh@974 -- # wait 64806
00:13:39.242 ************************************
00:13:39.242 END TEST app_cmdline
00:13:39.242 ************************************
00:13:39.242
00:13:39.242 real 0m4.051s
00:13:39.242 user 0m4.441s
00:13:39.242 sys 0m0.607s
00:13:39.242 17:14:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:39.242 17:14:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x
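That -32601 is the point of the test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected before dispatch. A sketch of the contrast (same binary and flags as traced):

  # Allowed by --rpcs-allowed: returns the version object shown earlier.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # Not in the allow list: fails with -32601 "Method not found", as captured above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats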
00:13:39.242 17:14:24 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:13:39.242 17:14:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:39.242 17:14:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:39.242 17:14:24 -- common/autotest_common.sh@10 -- # set +x
00:13:39.242 ************************************
00:13:39.242 START TEST version
00:13:39.242 ************************************
00:13:39.242 17:14:24 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:13:39.242 * Looking for test storage...
00:13:39.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:13:39.242 17:14:25 version -- app/version.sh@17 -- # get_header_version major
00:13:39.242 17:14:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # cut -f2
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # tr -d '"'
00:13:39.242 17:14:25 version -- app/version.sh@17 -- # major=24
00:13:39.242 17:14:25 version -- app/version.sh@18 -- # get_header_version minor
00:13:39.242 17:14:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # cut -f2
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # tr -d '"'
00:13:39.242 17:14:25 version -- app/version.sh@18 -- # minor=9
00:13:39.242 17:14:25 version -- app/version.sh@19 -- # get_header_version patch
00:13:39.242 17:14:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # cut -f2
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # tr -d '"'
00:13:39.242 17:14:25 version -- app/version.sh@19 -- # patch=0
00:13:39.242 17:14:25 version -- app/version.sh@20 -- # get_header_version suffix
00:13:39.242 17:14:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # cut -f2
00:13:39.242 17:14:25 version -- app/version.sh@14 -- # tr -d '"'
00:13:39.242 17:14:25 version -- app/version.sh@20 -- # suffix=-pre
00:13:39.242 17:14:25 version -- app/version.sh@22 -- # version=24.9
00:13:39.242 17:14:25 version -- app/version.sh@25 -- # (( patch != 0 ))
00:13:39.242 17:14:25 version -- app/version.sh@28 -- # version=24.9rc0
00:13:39.242 17:14:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:13:39.242 17:14:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:13:39.242 17:14:25 version -- app/version.sh@30 -- # py_version=24.9rc0
00:13:39.242 17:14:25 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:13:39.242
00:13:39.242 real 0m0.147s
00:13:39.242 user 0m0.085s
00:13:39.242 sys 0m0.091s
00:13:39.242 ************************************
00:13:39.242 END TEST version
00:13:39.242 ************************************
00:13:39.242 17:14:25 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:39.242 17:14:25 version -- common/autotest_common.sh@10 -- # set +x
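get_header_version (app/version.sh@13-14, traced above) is just a grep/cut/tr pipeline over include/spdk/version.h. A condensed sketch of the traced commands (the parameterized wrapper is editorial; the pipeline itself matches the trace, with the caller passing MAJOR, MINOR, PATCH, or SUFFIX):

  # e.g. get_header_version MAJOR -> 24
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }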
00:13:39.242 17:14:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.242 17:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:39.242 ************************************ 00:13:39.242 START TEST blockdev_nvme 00:13:39.242 ************************************ 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:13:39.242 * Looking for test storage... 00:13:39.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:39.242 17:14:25 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:39.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
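Note that blockdev.sh sets rpc_py=rpc_cmd and exports RPC_PIPE_TIMEOUT=30 above: rather than paying Python start-up cost on every RPC, the harness keeps a single rpc.py process alive and feeds it one method line at a time. A rough sketch of that pattern follows; the real implementation lives in autotest_common.sh, and the '**STATUS=' response framing of rpc.py --server is assumed from that usage:

# Sketch only: long-lived RPC channel behind rpc_cmd (framing assumed).
coproc RPC { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server; }

rpc_cmd_sketch() {
    echo "$*" >&"${RPC[1]}"                      # one request per line
    local line
    while read -r -t 30 line <&"${RPC[0]}"; do   # 30s ~ RPC_PIPE_TIMEOUT above
        [[ $line == '**STATUS='* ]] && return "${line#*=}"
        echo "$line"                             # response body
    done
    return 1                                     # timed out
}

rpc_cmd_sketch rpc_get_methods > /tmp/methods.json   # redirect, don't pipe:
jq -r '.[]' /tmp/methods.json                        # coproc fds don't survive subshells
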
00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64973 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 64973 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 64973 ']' 00:13:39.242 17:14:25 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.242 17:14:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.242 [2024-07-24 17:14:25.376513] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:39.242 [2024-07-24 17:14:25.377050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64973 ] 00:13:39.499 [2024-07-24 17:14:25.555256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.780 [2024-07-24 17:14:25.826105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.357 17:14:26 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.357 17:14:26 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:13:40.357 17:14:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:40.357 17:14:26 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:13:40.357 17:14:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:13:40.357 17:14:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:40.357 17:14:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:40.615 17:14:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:40.615 17:14:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.615 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:26 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:40.873 17:14:26 blockdev_nvme -- 
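The single-quoted blob handed to load_subsystem_config above is gen_nvme.sh output flattened onto one line: one bdev_nvme_attach_controller call per emulated PCIe controller. Reflowed for readability (content identical to the log) and loadable by hand with the same -j flag the test uses:

# The same subsystem config the test loads, pretty-printed.
cat > /tmp/nvme_bdevs.json <<'EOF'
{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$(cat /tmp/nvme_bdevs.json)"
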
common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:13:40.873 17:14:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.873 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:27 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:40.873 17:14:27 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:40.873 17:14:27 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:40.873 17:14:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.873 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.873 17:14:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.873 17:14:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:40.873 17:14:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:40.874 17:14:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9b160256-104d-4bad-808b-5c18a54a2ba6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9b160256-104d-4bad-808b-5c18a54a2ba6",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": 
false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d27b8ee6-aaf7-4d03-a2cf-74db33770156"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d27b8ee6-aaf7-4d03-a2cf-74db33770156",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2dfe7459-601d-414d-9abf-f23434010821"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2dfe7459-601d-414d-9abf-f23434010821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e57c0db3-6ab2-40c7-a085-dd690a2bcc13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e57c0db3-6ab2-40c7-a085-dd690a2bcc13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ecf516ac-2b1c-4b35-8fa3-90b8176c4cf5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ecf516ac-2b1c-4b35-8fa3-90b8176c4cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dfe58691-4f63-48e4-8902-da87c2426749"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dfe58691-4f63-48e4-8902-da87c2426749",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' 
' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:41.132 17:14:27 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:41.132 17:14:27 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:13:41.132 17:14:27 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:41.132 17:14:27 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 64973 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 64973 ']' 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 64973 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64973 00:13:41.132 killing process with pid 64973 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64973' 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 64973 00:13:41.132 17:14:27 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 64973 00:13:43.028 17:14:29 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:43.028 17:14:29 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:43.028 17:14:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:43.028 17:14:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.028 17:14:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.028 ************************************ 00:13:43.028 START TEST bdev_hello_world 00:13:43.028 ************************************ 00:13:43.028 17:14:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:43.285 [2024-07-24 17:14:29.356035] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:13:43.285 [2024-07-24 17:14:29.356211] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65068 ] 00:13:43.542 [2024-07-24 17:14:29.530263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.542 [2024-07-24 17:14:29.756871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.471 [2024-07-24 17:14:30.425925] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:44.471 [2024-07-24 17:14:30.425989] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:44.471 [2024-07-24 17:14:30.426032] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:44.471 [2024-07-24 17:14:30.429085] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:44.471 [2024-07-24 17:14:30.429564] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:44.471 [2024-07-24 17:14:30.429606] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:44.471 [2024-07-24 17:14:30.429817] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:44.471 00:13:44.471 [2024-07-24 17:14:30.429872] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:45.403 00:13:45.403 real 0m2.286s 00:13:45.403 user 0m1.866s 00:13:45.403 sys 0m0.308s 00:13:45.403 17:14:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.404 ************************************ 00:13:45.404 END TEST bdev_hello_world 00:13:45.404 ************************************ 00:13:45.404 17:14:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:45.404 17:14:31 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:45.404 17:14:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:45.404 17:14:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.404 17:14:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.404 ************************************ 00:13:45.404 START TEST bdev_bounds 00:13:45.404 ************************************ 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65110 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:45.404 Process bdevio pid: 65110 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65110' 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65110 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65110 ']' 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
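Judging by the notice sequence above, bdev_hello_world reduces to a single binary: hello_bdev loads the same bdev.json, opens Nvme0n1 and an I/O channel, writes a buffer, reads it back, and stops the app once "Hello World!" round-trips. Re-running it by hand (paths and flags taken from the log):

# Expect the same notices: open bdev -> write -> read 'Hello World!' -> stop.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk"/build/examples/hello_bdev \
    --json "$spdk"/test/bdev/bdev.json -b Nvme0n1 \
    && echo 'hello round trip OK'
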
00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.404 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:45.661 [2024-07-24 17:14:31.678837] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:45.661 [2024-07-24 17:14:31.679028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65110 ] 00:13:45.661 [2024-07-24 17:14:31.842215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.919 [2024-07-24 17:14:32.060973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.919 [2024-07-24 17:14:32.061118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.919 [2024-07-24 17:14:32.061150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.851 17:14:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.851 17:14:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:46.851 17:14:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:46.851 I/O targets: 00:13:46.851 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:46.851 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:46.851 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:46.851 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:46.851 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:46.851 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:46.851 00:13:46.851 00:13:46.851 CUnit - A unit testing framework for C - Version 2.1-3 00:13:46.851 http://cunit.sourceforge.net/ 00:13:46.851 00:13:46.851 00:13:46.851 Suite: bdevio tests on: Nvme3n1 00:13:46.851 Test: blockdev write read block ...passed 00:13:46.851 Test: blockdev write zeroes read block ...passed 00:13:46.851 Test: blockdev write zeroes read no split ...passed 00:13:46.851 Test: blockdev write zeroes read split ...passed 00:13:46.851 Test: blockdev write zeroes read split partial ...passed 00:13:46.851 Test: blockdev reset ...[2024-07-24 17:14:32.923454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:13:46.851 passed 00:13:46.851 Test: blockdev write read 8 blocks ...[2024-07-24 17:14:32.927363] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.851 passed 00:13:46.851 Test: blockdev write read size > 128k ...passed 00:13:46.851 Test: blockdev write read invalid size ...passed 00:13:46.851 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.851 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.851 Test: blockdev write read max offset ...passed 00:13:46.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.851 Test: blockdev writev readv 8 blocks ...passed 00:13:46.851 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.851 Test: blockdev writev readv block ...passed 00:13:46.851 Test: blockdev writev readv size > 128k ...passed 00:13:46.851 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.851 Test: blockdev comparev and writev ...[2024-07-24 17:14:32.935483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26e20a000 len:0x1000 00:13:46.851 [2024-07-24 17:14:32.935561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:46.851 passed 00:13:46.851 Test: blockdev nvme passthru rw ...passed 00:13:46.851 Test: blockdev nvme passthru vendor specific ...passed 00:13:46.851 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:32.936452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:46.851 [2024-07-24 17:14:32.936502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:46.851 passed 00:13:46.851 Test: blockdev copy ...passed 00:13:46.851 Suite: bdevio tests on: Nvme2n3 00:13:46.851 Test: blockdev write read block ...passed 00:13:46.851 Test: blockdev write zeroes read block ...passed 00:13:46.851 Test: blockdev write zeroes read no split ...passed 00:13:46.851 Test: blockdev write zeroes read split ...passed 00:13:46.851 Test: blockdev write zeroes read split partial ...passed 00:13:46.851 Test: blockdev reset ...[2024-07-24 17:14:33.002452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:13:46.851 [2024-07-24 17:14:33.006900] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.851 passed 00:13:46.851 Test: blockdev write read 8 blocks ...passed 00:13:46.851 Test: blockdev write read size > 128k ...passed 00:13:46.851 Test: blockdev write read invalid size ...passed 00:13:46.851 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.851 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.851 Test: blockdev write read max offset ...passed 00:13:46.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.851 Test: blockdev writev readv 8 blocks ...passed 00:13:46.851 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.851 Test: blockdev writev readv block ...passed 00:13:46.852 Test: blockdev writev readv size > 128k ...passed 00:13:46.852 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.852 Test: blockdev comparev and writev ...[2024-07-24 17:14:33.015491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x250004000 len:0x1000 00:13:46.852 [2024-07-24 17:14:33.015550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:46.852 passed 00:13:46.852 Test: blockdev nvme passthru rw ...passed 00:13:46.852 Test: blockdev nvme passthru vendor specific ...passed 00:13:46.852 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:33.016463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:46.852 [2024-07-24 17:14:33.016509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:46.852 passed 00:13:46.852 Test: blockdev copy ...passed 00:13:46.852 Suite: bdevio tests on: Nvme2n2 00:13:46.852 Test: blockdev write read block ...passed 00:13:46.852 Test: blockdev write zeroes read block ...passed 00:13:46.852 Test: blockdev write zeroes read no split ...passed 00:13:46.852 Test: blockdev write zeroes read split ...passed 00:13:46.852 Test: blockdev write zeroes read split partial ...passed 00:13:46.852 Test: blockdev reset ...[2024-07-24 17:14:33.079456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:13:46.852 [2024-07-24 17:14:33.083827] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.852 passed 00:13:46.852 Test: blockdev write read 8 blocks ...passed 00:13:46.852 Test: blockdev write read size > 128k ...passed 00:13:46.852 Test: blockdev write read invalid size ...passed 00:13:46.852 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.852 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.852 Test: blockdev write read max offset ...passed 00:13:46.852 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.110 Test: blockdev writev readv 8 blocks ...passed 00:13:47.110 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.110 Test: blockdev writev readv block ...passed 00:13:47.110 Test: blockdev writev readv size > 128k ...passed 00:13:47.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.110 Test: blockdev comparev and writev ...[2024-07-24 17:14:33.093417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28023a000 len:0x1000 00:13:47.110 [2024-07-24 17:14:33.093476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev nvme passthru rw ...passed 00:13:47.110 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.110 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:33.094321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:47.110 [2024-07-24 17:14:33.094368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev copy ...passed 00:13:47.110 Suite: bdevio tests on: Nvme2n1 00:13:47.110 Test: blockdev write read block ...passed 00:13:47.110 Test: blockdev write zeroes read block ...passed 00:13:47.110 Test: blockdev write zeroes read no split ...passed 00:13:47.110 Test: blockdev write zeroes read split ...passed 00:13:47.110 Test: blockdev write zeroes read split partial ...passed 00:13:47.110 Test: blockdev reset ...[2024-07-24 17:14:33.168907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:13:47.110 [2024-07-24 17:14:33.173182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:47.110 passed 00:13:47.110 Test: blockdev write read 8 blocks ...passed 00:13:47.110 Test: blockdev write read size > 128k ...passed 00:13:47.110 Test: blockdev write read invalid size ...passed 00:13:47.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.110 Test: blockdev write read max offset ...passed 00:13:47.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.110 Test: blockdev writev readv 8 blocks ...passed 00:13:47.110 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.110 Test: blockdev writev readv block ...passed 00:13:47.110 Test: blockdev writev readv size > 128k ...passed 00:13:47.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.110 Test: blockdev comparev and writev ...[2024-07-24 17:14:33.183121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280234000 len:0x1000 00:13:47.110 [2024-07-24 17:14:33.183196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev nvme passthru rw ...passed 00:13:47.110 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.110 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:33.184102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:47.110 [2024-07-24 17:14:33.184145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev copy ...passed 00:13:47.110 Suite: bdevio tests on: Nvme1n1 00:13:47.110 Test: blockdev write read block ...passed 00:13:47.110 Test: blockdev write zeroes read block ...passed 00:13:47.110 Test: blockdev write zeroes read no split ...passed 00:13:47.110 Test: blockdev write zeroes read split ...passed 00:13:47.110 Test: blockdev write zeroes read split partial ...passed 00:13:47.110 Test: blockdev reset ...[2024-07-24 17:14:33.258482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:13:47.110 [2024-07-24 17:14:33.263146] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:47.110 passed 00:13:47.110 Test: blockdev write read 8 blocks ...passed 00:13:47.110 Test: blockdev write read size > 128k ...passed 00:13:47.110 Test: blockdev write read invalid size ...passed 00:13:47.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.110 Test: blockdev write read max offset ...passed 00:13:47.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.110 Test: blockdev writev readv 8 blocks ...passed 00:13:47.110 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.110 Test: blockdev writev readv block ...passed 00:13:47.110 Test: blockdev writev readv size > 128k ...passed 00:13:47.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.110 Test: blockdev comparev and writev ...[2024-07-24 17:14:33.273874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280230000 len:0x1000 00:13:47.110 [2024-07-24 17:14:33.273955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev nvme passthru rw ...passed 00:13:47.110 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.110 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:33.275089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:47.110 [2024-07-24 17:14:33.275160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:47.110 passed 00:13:47.110 Test: blockdev copy ...passed 00:13:47.110 Suite: bdevio tests on: Nvme0n1 00:13:47.111 Test: blockdev write read block ...passed 00:13:47.111 Test: blockdev write zeroes read block ...passed 00:13:47.111 Test: blockdev write zeroes read no split ...passed 00:13:47.111 Test: blockdev write zeroes read split ...passed 00:13:47.367 Test: blockdev write zeroes read split partial ...passed 00:13:47.367 Test: blockdev reset ...[2024-07-24 17:14:33.348719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:13:47.367 [2024-07-24 17:14:33.352583] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:47.367 passed 00:13:47.367 Test: blockdev write read 8 blocks ...passed 00:13:47.367 Test: blockdev write read size > 128k ...passed 00:13:47.367 Test: blockdev write read invalid size ...passed 00:13:47.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.367 Test: blockdev write read max offset ...passed 00:13:47.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.367 Test: blockdev writev readv 8 blocks ...passed 00:13:47.367 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.367 Test: blockdev writev readv block ...passed 00:13:47.367 Test: blockdev writev readv size > 128k ...passed 00:13:47.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.367 Test: blockdev comparev and writev ...passed 00:13:47.367 Test: blockdev nvme passthru rw ...[2024-07-24 17:14:33.360954] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:47.367 separate metadata which is not supported yet. 00:13:47.367 passed 00:13:47.367 Test: blockdev nvme passthru vendor specific ...passed 00:13:47.367 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:33.361725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:47.367 [2024-07-24 17:14:33.361783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:47.367 passed 00:13:47.367 Test: blockdev copy ...passed 00:13:47.367 00:13:47.367 Run Summary: Type Total Ran Passed Failed Inactive 00:13:47.367 suites 6 6 n/a 0 0 00:13:47.367 tests 138 138 138 0 0 00:13:47.367 asserts 893 893 893 0 n/a 00:13:47.367 00:13:47.367 Elapsed time = 1.370 seconds 00:13:47.367 0 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65110 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65110 ']' 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65110 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65110 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65110' 00:13:47.367 killing process with pid 65110 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65110 00:13:47.367 17:14:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65110 00:13:48.299 17:14:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:48.299 00:13:48.299 real 0m2.815s 00:13:48.299 user 0m6.868s 00:13:48.299 sys 0m0.392s 00:13:48.299 17:14:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.299 17:14:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:48.299 ************************************ 00:13:48.299 END 
TEST bdev_bounds 00:13:48.299 ************************************ 00:13:48.299 17:14:34 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:48.299 17:14:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:48.299 17:14:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.299 17:14:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.299 ************************************ 00:13:48.299 START TEST bdev_nbd 00:13:48.299 ************************************ 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65175 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65175 /var/tmp/spdk-nbd.sock 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65175 ']' 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.299 
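nbd_function_test, which the remainder of this run traces, exercises the kernel-NBD export path: bdev_svc listens on /var/tmp/spdk-nbd.sock, nbd_start_disk maps each bdev to a /dev/nbdN (the RPC prints the assigned device, captured as nbd_device below), waitfornbd polls /proc/partitions up to 20 times before a one-block O_DIRECT dd smoke-read (the "1+0 records in/out" lines that follow), and teardown reverses it via nbd_get_disks / nbd_stop_disk / waitfornbd_exit. A condensed sketch of that loop; the helper names, scratch paths, and retry pacing are ours, and the real logic lives in nbd_common.sh and autotest_common.sh:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

probe_nbd() {                          # sketch of nbd_start_disk + waitfornbd
    local bdev=$1 nbd i
    nbd=$($rpc nbd_start_disk "$bdev") # prints the assigned /dev/nbdN
    for ((i = 1; i <= 20; i++)); do    # same retry bound as the trace
        grep -q -w "${nbd#/dev/}" /proc/partitions && break
        sleep 0.1                      # pacing assumed
    done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}

for b in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do probe_nbd "$b"; done

# Teardown: list mappings, stop each, wait for the node to vanish.
for dev in $($rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
    $rpc nbd_stop_disk "$dev"
    while grep -q -w "${dev#/dev/}" /proc/partitions; do sleep 0.1; done
done
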
17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.299 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 [2024-07-24 17:14:34.550623] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:48.557 [2024-07-24 17:14:34.550992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.557 [2024-07-24 17:14:34.719498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.815 [2024-07-24 17:14:34.952621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:49.749 17:14:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.749 1+0 records in 00:13:49.749 1+0 records out 00:13:49.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476902 s, 8.6 MB/s 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:49.749 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.007 1+0 records in 00:13:50.007 1+0 records out 00:13:50.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670048 s, 6.1 MB/s 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:50.007 17:14:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:50.007 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.295 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.295 1+0 records in 00:13:50.295 1+0 records out 00:13:50.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538066 s, 7.6 MB/s 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:50.296 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.572 1+0 records in 00:13:50.572 1+0 records out 00:13:50.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743861 s, 5.5 MB/s 00:13:50.572 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:50.830 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.088 1+0 records in 00:13:51.088 1+0 records out 00:13:51.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680993 s, 6.0 MB/s 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:51.088 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.346 1+0 records in 00:13:51.346 1+0 records out 00:13:51.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792971 s, 5.2 MB/s 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:51.346 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd0", 00:13:51.604 "bdev_name": "Nvme0n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd1", 00:13:51.604 "bdev_name": "Nvme1n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd2", 00:13:51.604 "bdev_name": "Nvme2n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd3", 00:13:51.604 "bdev_name": "Nvme2n2" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd4", 00:13:51.604 "bdev_name": "Nvme2n3" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd5", 00:13:51.604 "bdev_name": "Nvme3n1" 00:13:51.604 } 00:13:51.604 ]' 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd0", 00:13:51.604 "bdev_name": "Nvme0n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd1", 00:13:51.604 "bdev_name": "Nvme1n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 
"nbd_device": "/dev/nbd2", 00:13:51.604 "bdev_name": "Nvme2n1" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd3", 00:13:51.604 "bdev_name": "Nvme2n2" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd4", 00:13:51.604 "bdev_name": "Nvme2n3" 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "nbd_device": "/dev/nbd5", 00:13:51.604 "bdev_name": "Nvme3n1" 00:13:51.604 } 00:13:51.604 ]' 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.604 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.862 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:52.120 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:52.378 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:52.636 17:14:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.636 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.894 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
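
Every waitfornbd/waitfornbd_exit trace repeated in this section follows the same polling pattern: check /proc/partitions for the device name up to 20 times, and on attach additionally prove the device answers I/O with a single 4 KiB direct read. A condensed sketch of the two helpers as they appear in the autotest_common.sh and nbd_common.sh traces above (the sleep between retries and the scratch path are assumptions; they are not visible in the xtrace output):

    # Wait for an nbd device to appear in /proc/partitions, then prove it
    # actually serves reads before the test proceeds.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 1   # interval assumed; the trace only shows the loop bounds
        done
        # One 4 KiB direct read: fails if the kernel node exists but the
        # SPDK nbd server behind it is not answering yet.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest
    }

    # Teardown is the mirror image: wait for the name to leave /proc/partitions.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 1
        done
    }
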
00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.153 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:53.410 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:53.411 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:53.669 /dev/nbd0 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.927 1+0 records in 00:13:53.927 1+0 records out 00:13:53.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555357 s, 7.4 MB/s 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:53.927 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:13:53.927 /dev/nbd1 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:54.185 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.186 1+0 records in 00:13:54.186 1+0 records out 
00:13:54.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606007 s, 6.8 MB/s 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:54.186 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:13:54.443 /dev/nbd10 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.443 1+0 records in 00:13:54.443 1+0 records out 00:13:54.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682351 s, 6.0 MB/s 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:54.443 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:13:54.701 /dev/nbd11 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:54.701 17:14:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.701 1+0 records in 00:13:54.701 1+0 records out 00:13:54.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508042 s, 8.1 MB/s 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:54.701 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:13:54.960 /dev/nbd12 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.960 1+0 records in 00:13:54.960 1+0 records out 00:13:54.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067219 s, 6.1 MB/s 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:54.960 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:13:55.235 /dev/nbd13 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.235 1+0 records in 00:13:55.235 1+0 records out 00:13:55.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790623 s, 5.2 MB/s 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.235 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:55.493 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:55.493 { 00:13:55.493 "nbd_device": "/dev/nbd0", 00:13:55.493 "bdev_name": "Nvme0n1" 00:13:55.493 }, 00:13:55.493 { 00:13:55.493 "nbd_device": "/dev/nbd1", 00:13:55.494 "bdev_name": "Nvme1n1" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd10", 00:13:55.494 "bdev_name": "Nvme2n1" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd11", 00:13:55.494 "bdev_name": "Nvme2n2" 00:13:55.494 }, 
00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd12", 00:13:55.494 "bdev_name": "Nvme2n3" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd13", 00:13:55.494 "bdev_name": "Nvme3n1" 00:13:55.494 } 00:13:55.494 ]' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd0", 00:13:55.494 "bdev_name": "Nvme0n1" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd1", 00:13:55.494 "bdev_name": "Nvme1n1" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd10", 00:13:55.494 "bdev_name": "Nvme2n1" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd11", 00:13:55.494 "bdev_name": "Nvme2n2" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd12", 00:13:55.494 "bdev_name": "Nvme2n3" 00:13:55.494 }, 00:13:55.494 { 00:13:55.494 "nbd_device": "/dev/nbd13", 00:13:55.494 "bdev_name": "Nvme3n1" 00:13:55.494 } 00:13:55.494 ]' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:55.494 /dev/nbd1 00:13:55.494 /dev/nbd10 00:13:55.494 /dev/nbd11 00:13:55.494 /dev/nbd12 00:13:55.494 /dev/nbd13' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:55.494 /dev/nbd1 00:13:55.494 /dev/nbd10 00:13:55.494 /dev/nbd11 00:13:55.494 /dev/nbd12 00:13:55.494 /dev/nbd13' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:55.494 256+0 records in 00:13:55.494 256+0 records out 00:13:55.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00762895 s, 137 MB/s 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:55.494 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:55.752 256+0 records in 00:13:55.752 256+0 records out 00:13:55.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151931 s, 6.9 MB/s 00:13:55.752 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:55.752 17:14:41 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:55.752 256+0 records in 00:13:55.752 256+0 records out 00:13:55.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165422 s, 6.3 MB/s 00:13:55.752 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:55.752 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:56.010 256+0 records in 00:13:56.010 256+0 records out 00:13:56.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172378 s, 6.1 MB/s 00:13:56.010 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:56.010 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:56.268 256+0 records in 00:13:56.268 256+0 records out 00:13:56.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173479 s, 6.0 MB/s 00:13:56.268 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:56.268 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:56.268 256+0 records in 00:13:56.268 256+0 records out 00:13:56.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158473 s, 6.6 MB/s 00:13:56.268 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:56.268 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:56.526 256+0 records in 00:13:56.526 256+0 records out 00:13:56.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173114 s, 6.1 MB/s 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:56.526 17:14:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.526 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:56.785 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.785 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.350 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:57.607 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.608 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.866 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.124 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:58.382 
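
The write/verify pass that completed just above (nbd_dd_data_verify in the trace) is a plain round-trip check: fill a scratch file from /dev/urandom, dd it onto every attached nbd device with direct I/O, then cmp the first 1 MiB of each device against the scratch file. Reduced to its essentials, with the repo paths shortened for readability:

    # Round-trip data check over the six nbd devices traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest

    # 1 MiB reference pattern (256 x 4 KiB blocks of random data).
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Write the pattern to every device, bypassing the page cache.
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # Read it back: -b prints differing bytes, -n 1M bounds the compare.
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
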
17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.382 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:58.640 17:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:58.898 malloc_lvol_verify 00:13:58.898 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:59.155 f6748340-f7e4-4963-abb7-760cff31c66e 00:13:59.155 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:59.414 9cc96a4d-553f-4f30-b3e6-6a986a3c5d5a 00:13:59.414 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:59.671 /dev/nbd0 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:59.671 mke2fs 1.46.5 
(30-Dec-2021) 00:13:59.671 Discarding device blocks: 0/4096 done 00:13:59.671 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:59.671 00:13:59.671 Allocating group tables: 0/1 done 00:13:59.671 Writing inode tables: 0/1 done 00:13:59.671 Creating journal (1024 blocks): done 00:13:59.671 Writing superblocks and filesystem accounting information: 0/1 done 00:13:59.671 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.671 17:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65175 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65175 ']' 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65175 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65175 00:13:59.982 killing process with pid 65175 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65175' 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65175 00:13:59.982 17:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65175 00:14:01.352 17:14:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:01.352 00:14:01.352 real 0m12.849s 00:14:01.352 user 0m18.034s 00:14:01.352 sys 0m4.108s 00:14:01.352 
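
Before killing the nbd server (pid 65175 above), the suite also ran nbd_with_lvol_verify: build a malloc bdev, carve a logical volume store and a volume out of it, export the lvol over nbd, and prove the resulting block device is usable by putting ext4 on it. The same sequence as standalone RPC calls, with the socket path and sizes taken from the trace (the lvstore/lvol UUIDs printed above are assigned fresh on every run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # 16 MiB malloc bdev with 512-byte blocks backs the volume store.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs   # 4 MiB volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # If mkfs.ext4 completes, reads and writes through the whole
    # bdev -> lvol -> nbd stack work end to end.
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
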
************************************ 00:14:01.352 END TEST bdev_nbd 00:14:01.352 ************************************ 00:14:01.352 17:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.352 17:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:01.352 17:14:47 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:01.352 skipping fio tests on NVMe due to multi-ns failures. 00:14:01.352 17:14:47 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:14:01.352 17:14:47 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:14:01.352 17:14:47 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:01.353 17:14:47 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:01.353 17:14:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:01.353 17:14:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.353 17:14:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.353 ************************************ 00:14:01.353 START TEST bdev_verify 00:14:01.353 ************************************ 00:14:01.353 17:14:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:01.353 [2024-07-24 17:14:47.463118] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:01.353 [2024-07-24 17:14:47.463362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65581 ] 00:14:01.610 [2024-07-24 17:14:47.639473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.868 [2024-07-24 17:14:47.859541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.868 [2024-07-24 17:14:47.859558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.433 Running I/O for 5 seconds... 
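
The bdevperf invocation traced above drives all six NVMe bdevs through a verify workload: 128 outstanding 4 KiB I/Os per job for 5 seconds, with -C letting both cores of the 0x3 mask submit I/O to each bdev (hence the two reactors above and the paired Core Mask 0x1/0x2 jobs in the table below). Stripped of the run_test plumbing, the run reduces to:

    # Standalone equivalent of the traced run (paths taken from the log):
    #   -q 128      queue depth per job
    #   -o 4096     I/O size in bytes
    #   -w verify   write, read back, and compare patterns
    #   -t 5        run time in seconds
    #   -C          every core may submit I/O to every bdev
    #   -m 0x3      two-core mask
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
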
00:14:07.719 
00:14:07.719 Latency(us)
00:14:07.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:07.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0xbd0bd
00:14:07.719 Nvme0n1 : 5.08 1537.22 6.00 0.00 0.00 83068.97 18469.24 76736.70
00:14:07.719 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:14:07.719 Nvme0n1 : 5.09 1560.58 6.10 0.00 0.00 81852.81 11379.43 93418.59
00:14:07.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0xa0000
00:14:07.719 Nvme1n1 : 5.08 1536.69 6.00 0.00 0.00 82988.26 18588.39 71017.19
00:14:07.719 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0xa0000 length 0xa0000
00:14:07.719 Nvme1n1 : 5.09 1559.96 6.09 0.00 0.00 81591.25 12034.79 84839.33
00:14:07.719 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0x80000
00:14:07.719 Nvme2n1 : 5.08 1536.16 6.00 0.00 0.00 82903.18 17396.83 66250.94
00:14:07.719 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x80000 length 0x80000
00:14:07.719 Nvme2n1 : 5.09 1559.42 6.09 0.00 0.00 81433.44 12571.00 86745.83
00:14:07.719 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0x80000
00:14:07.719 Nvme2n2 : 5.08 1535.68 6.00 0.00 0.00 82785.01 16801.05 69110.69
00:14:07.719 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x80000 length 0x80000
00:14:07.719 Nvme2n2 : 5.09 1558.93 6.09 0.00 0.00 81303.97 12511.42 88652.33
00:14:07.719 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0x80000
00:14:07.719 Nvme2n3 : 5.09 1535.24 6.00 0.00 0.00 82656.01 16324.42 71970.44
00:14:07.719 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x80000 length 0x80000
00:14:07.719 Nvme2n3 : 5.09 1558.52 6.09 0.00 0.00 81194.10 12451.84 91988.71
00:14:07.719 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x0 length 0x20000
00:14:07.719 Nvme3n1 : 5.09 1534.64 5.99 0.00 0.00 82542.05 12094.37 76260.07
00:14:07.719 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:07.719 Verification LBA range: start 0x20000 length 0x20000
00:14:07.719 Nvme3n1 : 5.09 1558.09 6.09 0.00 0.00 81129.82 9592.09 94371.84
00:14:07.719 ===================================================================================================================
00:14:07.719 Total : 18571.14 72.54 0.00 0.00 82115.02 9592.09 94371.84
00:14:09.095 
00:14:09.095 real 0m7.752s
00:14:09.095 user 0m14.011s
00:14:09.095 sys 0m0.350s
00:14:09.095 ************************************
00:14:09.095 END TEST bdev_verify
00:14:09.095 ************************************
00:14:09.095 17:14:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:09.095 17:14:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:14:09.095 17:14:55 blockdev_nvme -- bdev/blockdev.sh@777 --
00:14:09.095 17:14:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:09.095 17:14:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:14:09.095 17:14:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:09.095 17:14:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:09.095 ************************************
00:14:09.095 START TEST bdev_verify_big_io
00:14:09.095 ************************************
00:14:09.095 17:14:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:09.095 [2024-07-24 17:14:55.275311] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:14:09.095 [2024-07-24 17:14:55.275559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65679 ]
00:14:09.353 [2024-07-24 17:14:55.455784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:09.611 [2024-07-24 17:14:55.703531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:09.611 [2024-07-24 17:14:55.703533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:10.544 Running I/O for 5 seconds...
00:14:17.101 
00:14:17.101 Latency(us)
00:14:17.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.101 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0xbd0b
00:14:17.101 Nvme0n1 : 5.74 130.95 8.18 0.00 0.00 945299.04 26214.40 888429.85
00:14:17.101 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0xbd0b length 0xbd0b
00:14:17.101 Nvme0n1 : 5.58 126.08 7.88 0.00 0.00 963998.72 14596.65 991380.95
00:14:17.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0xa000
00:14:17.101 Nvme1n1 : 5.75 130.01 8.13 0.00 0.00 923400.41 58386.62 831234.79
00:14:17.101 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0xa000 length 0xa000
00:14:17.101 Nvme1n1 : 5.76 131.33 8.21 0.00 0.00 905905.96 51713.86 1006632.96
00:14:17.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0x8000
00:14:17.101 Nvme2n1 : 5.75 133.59 8.35 0.00 0.00 884333.23 104857.60 857925.82
00:14:17.101 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x8000 length 0x8000
00:14:17.101 Nvme2n1 : 5.76 130.22 8.14 0.00 0.00 897091.24 48854.11 1570957.50
00:14:17.101 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0x8000
00:14:17.101 Nvme2n2 : 5.75 133.51 8.34 0.00 0.00 859258.57 105810.85 884616.84
00:14:17.101 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x8000 length 0x8000
00:14:17.101 Nvme2n2 : 5.79 135.04 8.44 0.00 0.00 843455.41 28240.06 1609087.53
00:14:17.101 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0x8000
00:14:17.101 Nvme2n3 : 5.80 143.34 8.96 0.00 0.00 786808.45 9949.56 907494.87
00:14:17.101 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x8000 length 0x8000
00:14:17.101 Nvme2n3 : 5.81 139.88 8.74 0.00 0.00 790885.78 17515.99 1647217.57
00:14:17.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x0 length 0x2000
00:14:17.101 Nvme3n1 : 5.82 150.12 9.38 0.00 0.00 731745.02 12153.95 918933.88
00:14:17.101 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:17.101 Verification LBA range: start 0x2000 length 0x2000
00:14:17.101 Nvme3n1 : 5.87 164.48 10.28 0.00 0.00 657663.89 606.95 1670095.59
===================================================================================================================
00:14:17.101 Total : 1648.55 103.03 0.00 0.00 841914.58 606.95 1670095.59
00:14:18.471 
00:14:18.471 real 0m9.119s
00:14:18.471 user 0m16.669s
00:14:18.471 sys 0m0.412s
00:14:18.471 ************************************
00:14:18.471 END TEST bdev_verify_big_io
00:14:18.471 ************************************
00:14:18.471 17:15:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:18.471 17:15:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:18.471 17:15:04 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:18.471 17:15:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:14:18.471 17:15:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:18.471 17:15:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:18.471 ************************************
00:14:18.471 START TEST bdev_write_zeroes
00:14:18.471 ************************************
00:14:18.471 17:15:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:18.471 [2024-07-24 17:15:04.417480] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:14:18.471 [2024-07-24 17:15:04.417670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65799 ]
00:14:18.471 [2024-07-24 17:15:04.574939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:18.729 [2024-07-24 17:15:04.798741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:19.295 Running I/O for 1 seconds...
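Before the write_zeroes numbers land below, it is worth decoding the bdevperf invocations traced above. Reduced to a direct command (same binary and config paths as this run; -C is passed through verbatim from the harness, and the flag glosses in the comments are the usual bdevperf meanings, stated here as an aid rather than taken from this log):

# Replay the big_io pass by hand: 128 outstanding I/Os per job, 64 KiB I/O
# size, read-back verification, 5 second run, reactors on cores 0-1 (mask 0x3).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The earlier bdev_verify pass is evidently the same command with a 4 KiB I/O size, which is why its table reports "IO size: 4096" while this one reports 65536.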
00:14:20.670 
00:14:20.670 Latency(us)
00:14:20.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:20.670 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme0n1 : 1.02 8745.67 34.16 0.00 0.00 14583.03 8638.84 27167.65
00:14:20.670 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme1n1 : 1.02 8731.25 34.11 0.00 0.00 14582.01 11498.59 21209.83
00:14:20.670 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme2n1 : 1.02 8717.66 34.05 0.00 0.00 14554.69 10962.39 19184.17
00:14:20.670 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme2n2 : 1.02 8753.49 34.19 0.00 0.00 14477.12 7685.59 17515.99
00:14:20.670 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme2n3 : 1.03 8740.11 34.14 0.00 0.00 14471.65 7477.06 17158.52
00:14:20.670 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:20.670 Nvme3n1 : 1.03 8727.10 34.09 0.00 0.00 14469.08 8102.63 17515.99
===================================================================================================================
00:14:20.670 Total : 52415.28 204.75 0.00 0.00 14522.75 7477.06 27167.65
00:14:21.606 
00:14:21.606 real 0m3.305s
00:14:21.606 user 0m2.930s
00:14:21.606 sys 0m0.256s
00:14:21.606 17:15:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:21.606 17:15:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:21.606 ************************************
00:14:21.606 END TEST bdev_write_zeroes
00:14:21.606 ************************************
00:14:21.606 17:15:07 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:21.606 17:15:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:14:21.606 17:15:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:21.606 17:15:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:21.606 ************************************
00:14:21.606 START TEST bdev_json_nonenclosed
00:14:21.606 ************************************
00:14:21.606 17:15:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:21.866 [2024-07-24 17:15:07.811898] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:14:21.866 [2024-07-24 17:15:07.812075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65852 ]
00:14:22.126 [2024-07-24 17:15:07.993512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:22.126 [2024-07-24 17:15:08.273010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:22.126 [2024-07-24 17:15:08.273146] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
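The failure above is the point of the test: bdev_json_nonenclosed feeds bdevperf a config whose top-level value is not a JSON object, and json_config_prepare_ctx must reject it cleanly instead of crashing. A hand-rolled fixture that trips the same check might look like this (the file content is illustrative; the repo's actual nonenclosed.json is not reproduced in this log):

# Any valid JSON that is not enclosed in a top-level {...} object will do;
# a bare array is enough to draw "not enclosed in {}" from the loader.
cat > /tmp/nonenclosed.json <<'EOF'
[
  { "subsystems": [] }
]
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1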
00:14:22.126 [2024-07-24 17:15:08.273178] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:22.126 [2024-07-24 17:15:08.273195] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:22.691 00:14:22.691 real 0m1.010s 00:14:22.691 user 0m0.742s 00:14:22.691 sys 0m0.159s 00:14:22.691 ************************************ 00:14:22.691 END TEST bdev_json_nonenclosed 00:14:22.691 ************************************ 00:14:22.691 17:15:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.691 17:15:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:22.691 17:15:08 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:22.691 17:15:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:22.691 17:15:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.691 17:15:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.691 ************************************ 00:14:22.691 START TEST bdev_json_nonarray 00:14:22.691 ************************************ 00:14:22.691 17:15:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:22.691 [2024-07-24 17:15:08.872170] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:22.691 [2024-07-24 17:15:08.872405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65889 ] 00:14:22.950 [2024-07-24 17:15:09.048113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.209 [2024-07-24 17:15:09.287335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.209 [2024-07-24 17:15:09.287491] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
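Likewise, bdev_json_nonarray exercises the next validation step: the top level is an object, but its "subsystems" key is not an array, which draws the "'subsystems' should be an array" error seen above. An illustrative stand-in for that fixture (again, not the repo file itself):

# "subsystems" must be an array of subsystem objects; handing the loader an
# object here triggers the error path at json_config.c:614.
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF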
00:14:23.209 [2024-07-24 17:15:09.287523] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:23.209 [2024-07-24 17:15:09.287540] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:23.776 00:14:23.776 real 0m0.941s 00:14:23.776 user 0m0.669s 00:14:23.776 sys 0m0.164s 00:14:23.776 17:15:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.776 17:15:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 ************************************ 00:14:23.776 END TEST bdev_json_nonarray 00:14:23.776 ************************************ 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:14:23.776 17:15:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:14:23.776 00:14:23.776 real 0m44.606s 00:14:23.776 user 1m5.999s 00:14:23.776 sys 0m7.101s 00:14:23.776 17:15:09 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.776 ************************************ 00:14:23.776 END TEST blockdev_nvme 00:14:23.776 ************************************ 00:14:23.776 17:15:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 17:15:09 -- spdk/autotest.sh@217 -- # uname -s 00:14:23.776 17:15:09 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:14:23.776 17:15:09 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:23.776 17:15:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:23.776 17:15:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.776 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 ************************************ 00:14:23.776 START TEST blockdev_nvme_gpt 00:14:23.776 ************************************ 00:14:23.776 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:23.776 * Looking for test storage... 
00:14:23.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:23.776 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65965 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:23.777 17:15:09 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 65965 00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 65965 ']' 00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
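The waitforlisten call traced above gates everything that follows: spdk_tgt (pid 65965 in this run) must be answering RPCs on /var/tmp/spdk.sock before the GPT setup can talk to it. The helper's core is a poll loop along these lines (a simplified sketch of the behaviour, not the helper's literal code):

# Block until the target's RPC socket responds; rpc_get_methods is a cheap
# round-trip the target answers as soon as it is up.
rpc_sock=/var/tmp/spdk.sock
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    # Bail out if the target died instead of spinning forever (65965 is the
    # pid this run happened to get).
    kill -0 65965 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
    sleep 0.1
done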
00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.777 17:15:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:24.035 [2024-07-24 17:15:10.072529] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:24.035 [2024-07-24 17:15:10.072822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65965 ] 00:14:24.035 [2024-07-24 17:15:10.258371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.293 [2024-07-24 17:15:10.511936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.228 17:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.228 17:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:14:25.228 17:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:25.228 17:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:14:25.228 17:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:25.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.745 Waiting for block devices as requested 00:14:25.745 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.745 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:26.003 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:26.003 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:31.271 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:31.271 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:14:31.271 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:31.271 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:31.271 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:31.271 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:14:31.272 BYT; 00:14:31.272 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:14:31.272 BYT; 00:14:31.272 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.272 17:15:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.272 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:14:32.206 The operation has completed successfully. 
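The GUID shuffle in the trace above is worth unpacking: the script reads SPDK's GPT partition-type GUID straight out of the C header, strips the 0x prefixes, and hands the result to sgdisk so the new partition is typed as an SPDK GPT partition. Condensed to its essentials (same header, GUIDs and device as the trace; the parameter substitution is an equivalent of the stripping the script performs):

# Pull SPDK_GPT_PART_TYPE_GUID out of gpt.h, drop every "0x" so sgdisk
# accepts it as a plain GUID, then set partition 1's type and unique GUID.
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//0x/}   # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1

The second sgdisk call that follows does the same for partition 2 with SPDK_GPT_PART_TYPE_GUID_OLD (7c5222bd-...), which is how the SPDK_TEST_first and SPDK_TEST_second partitions end up carrying the two GUID generations.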
00:14:32.206 17:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:14:33.140 The operation has completed successfully. 00:14:33.140 17:15:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:33.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:34.272 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.272 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.272 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:34.272 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:14:34.272 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.272 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.272 [] 00:14:34.272 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.272 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:14:34.272 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:14:34.272 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:14:34.272 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:34.530 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:14:34.530 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.530 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.788 
17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.788 17:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:35.048 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.048 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:35.048 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:35.048 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d1df96ad-4fab-4029-b380-dc094a21422d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d1df96ad-4fab-4029-b380-dc094a21422d",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "da50121a-da28-424b-93fa-1dd8d8996b5c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "da50121a-da28-424b-93fa-1dd8d8996b5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "17db4c89-06e9-46ab-b8dd-591e48c39882"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "17db4c89-06e9-46ab-b8dd-591e48c39882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "469194d4-8f68-41fb-a2d1-be61fc6c60da"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "469194d4-8f68-41fb-a2d1-be61fc6c60da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2c0b9e4e-8821-47ca-8a06-344b27629351"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c0b9e4e-8821-47ca-8a06-344b27629351",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:14:35.048 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:35.049 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:14:35.049 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:35.049 17:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 65965 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 65965 ']' 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 65965 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65965 00:14:35.049 killing process with pid 65965 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65965' 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 65965 00:14:35.049 17:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 65965 00:14:37.578 17:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:37.578 17:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:37.578 17:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:37.578 17:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.578 17:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:37.578 ************************************ 00:14:37.578 START TEST bdev_hello_world 00:14:37.578 ************************************ 00:14:37.578 17:15:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:37.578 [2024-07-24 17:15:23.470737] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:37.578 [2024-07-24 17:15:23.470913] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66597 ] 00:14:37.578 [2024-07-24 17:15:23.647383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.844 [2024-07-24 17:15:23.899495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.420 [2024-07-24 17:15:24.569926] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:38.420 [2024-07-24 17:15:24.569981] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:14:38.420 [2024-07-24 17:15:24.570022] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:38.420 [2024-07-24 17:15:24.573372] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:38.420 [2024-07-24 17:15:24.573755] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:38.420 [2024-07-24 17:15:24.573785] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:38.420 [2024-07-24 17:15:24.574017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:14:38.420 00:14:38.420 [2024-07-24 17:15:24.574068] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:39.793 00:14:39.793 real 0m2.488s 00:14:39.793 user 0m2.066s 00:14:39.793 sys 0m0.310s 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:39.793 ************************************ 00:14:39.793 END TEST bdev_hello_world 00:14:39.793 ************************************ 00:14:39.793 17:15:25 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:39.793 17:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:39.793 17:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.793 17:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:39.793 ************************************ 00:14:39.793 START TEST bdev_bounds 00:14:39.793 ************************************ 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:39.793 Process bdevio pid: 66644 00:14:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66644 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66644' 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66644 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 66644 ']' 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.793 17:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:39.793 [2024-07-24 17:15:25.997514] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:39.793 [2024-07-24 17:15:25.998061] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66644 ] 00:14:40.051 [2024-07-24 17:15:26.167397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:40.309 [2024-07-24 17:15:26.415030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.309 [2024-07-24 17:15:26.415174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.309 [2024-07-24 17:15:26.415203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.874 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.874 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:40.874 17:15:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:41.132 I/O targets: 00:14:41.132 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:41.132 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:14:41.132 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:14:41.132 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:41.132 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:41.132 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:41.132 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:41.132 00:14:41.132 00:14:41.132 CUnit - A unit testing framework for C - Version 2.1-3 00:14:41.132 http://cunit.sourceforge.net/ 00:14:41.132 00:14:41.132 00:14:41.132 Suite: bdevio tests on: Nvme3n1 00:14:41.132 Test: blockdev write read block ...passed 00:14:41.132 Test: blockdev write zeroes read block ...passed 00:14:41.132 Test: blockdev write zeroes read no split ...passed 00:14:41.132 Test: blockdev write zeroes read split ...passed 00:14:41.132 Test: blockdev write zeroes 
read split partial ...passed 00:14:41.132 Test: blockdev reset ...[2024-07-24 17:15:27.305468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:14:41.132 passed 00:14:41.132 Test: blockdev write read 8 blocks ...[2024-07-24 17:15:27.309359] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:41.132 passed 00:14:41.132 Test: blockdev write read size > 128k ...passed 00:14:41.132 Test: blockdev write read invalid size ...passed 00:14:41.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.132 Test: blockdev write read max offset ...passed 00:14:41.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.132 Test: blockdev writev readv 8 blocks ...passed 00:14:41.132 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.132 Test: blockdev writev readv block ...passed 00:14:41.132 Test: blockdev writev readv size > 128k ...passed 00:14:41.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.132 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.318163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x278006000 len:0x1000 00:14:41.132 [2024-07-24 17:15:27.318223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.132 passed 00:14:41.132 Test: blockdev nvme passthru rw ...passed 00:14:41.132 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:15:27.319261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:41.132 [2024-07-24 17:15:27.319302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:41.132 passed 00:14:41.132 Test: blockdev nvme admin passthru ...passed 00:14:41.132 Test: blockdev copy ...passed 00:14:41.132 Suite: bdevio tests on: Nvme2n3 00:14:41.132 Test: blockdev write read block ...passed 00:14:41.132 Test: blockdev write zeroes read block ...passed 00:14:41.132 Test: blockdev write zeroes read no split ...passed 00:14:41.132 Test: blockdev write zeroes read split ...passed 00:14:41.390 Test: blockdev write zeroes read split partial ...passed 00:14:41.390 Test: blockdev reset ...[2024-07-24 17:15:27.386760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:14:41.390 [2024-07-24 17:15:27.391511] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:41.390 passed 00:14:41.390 Test: blockdev write read 8 blocks ...passed 00:14:41.390 Test: blockdev write read size > 128k ...passed 00:14:41.390 Test: blockdev write read invalid size ...passed 00:14:41.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.390 Test: blockdev write read max offset ...passed 00:14:41.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.390 Test: blockdev writev readv 8 blocks ...passed 00:14:41.390 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.390 Test: blockdev writev readv block ...passed 00:14:41.390 Test: blockdev writev readv size > 128k ...passed 00:14:41.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.390 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.400517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27a03c000 len:0x1000 00:14:41.390 [2024-07-24 17:15:27.400576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.390 passed 00:14:41.390 Test: blockdev nvme passthru rw ...passed 00:14:41.390 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:15:27.401459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:41.390 [2024-07-24 17:15:27.401498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:41.390 passed 00:14:41.390 Test: blockdev nvme admin passthru ...passed 00:14:41.390 Test: blockdev copy ...passed 00:14:41.390 Suite: bdevio tests on: Nvme2n2 00:14:41.390 Test: blockdev write read block ...passed 00:14:41.390 Test: blockdev write zeroes read block ...passed 00:14:41.390 Test: blockdev write zeroes read no split ...passed 00:14:41.390 Test: blockdev write zeroes read split ...passed 00:14:41.390 Test: blockdev write zeroes read split partial ...passed 00:14:41.390 Test: blockdev reset ...[2024-07-24 17:15:27.470217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:14:41.390 [2024-07-24 17:15:27.474626] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:41.390 passed 00:14:41.390 Test: blockdev write read 8 blocks ...passed 00:14:41.390 Test: blockdev write read size > 128k ...passed 00:14:41.390 Test: blockdev write read invalid size ...passed 00:14:41.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.390 Test: blockdev write read max offset ...passed 00:14:41.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.390 Test: blockdev writev readv 8 blocks ...passed 00:14:41.390 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.390 Test: blockdev writev readv block ...passed 00:14:41.390 Test: blockdev writev readv size > 128k ...passed 00:14:41.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.391 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.483691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27a036000 len:0x1000 00:14:41.391 [2024-07-24 17:15:27.483748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.391 passed 00:14:41.391 Test: blockdev nvme passthru rw ...passed 00:14:41.391 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:15:27.484669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:41.391 [2024-07-24 17:15:27.484707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:41.391 passed 00:14:41.391 Test: blockdev nvme admin passthru ...passed 00:14:41.391 Test: blockdev copy ...passed 00:14:41.391 Suite: bdevio tests on: Nvme2n1 00:14:41.391 Test: blockdev write read block ...passed 00:14:41.391 Test: blockdev write zeroes read block ...passed 00:14:41.391 Test: blockdev write zeroes read no split ...passed 00:14:41.391 Test: blockdev write zeroes read split ...passed 00:14:41.391 Test: blockdev write zeroes read split partial ...passed 00:14:41.391 Test: blockdev reset ...[2024-07-24 17:15:27.549715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:14:41.391 [2024-07-24 17:15:27.554028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:41.391 passed 00:14:41.391 Test: blockdev write read 8 blocks ...passed 00:14:41.391 Test: blockdev write read size > 128k ...passed 00:14:41.391 Test: blockdev write read invalid size ...passed 00:14:41.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.391 Test: blockdev write read max offset ...passed 00:14:41.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.391 Test: blockdev writev readv 8 blocks ...passed 00:14:41.391 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.391 Test: blockdev writev readv block ...passed 00:14:41.391 Test: blockdev writev readv size > 128k ...passed 00:14:41.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.391 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.562522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27a032000 len:0x1000 00:14:41.391 [2024-07-24 17:15:27.562591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.391 passed 00:14:41.391 Test: blockdev nvme passthru rw ...passed 00:14:41.391 Test: blockdev nvme passthru vendor specific ...passed 00:14:41.391 Test: blockdev nvme admin passthru ...[2024-07-24 17:15:27.563701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:41.391 [2024-07-24 17:15:27.563764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:41.391 passed 00:14:41.391 Test: blockdev copy ...passed 00:14:41.391 Suite: bdevio tests on: Nvme1n1p2 00:14:41.391 Test: blockdev write read block ...passed 00:14:41.391 Test: blockdev write zeroes read block ...passed 00:14:41.391 Test: blockdev write zeroes read no split ...passed 00:14:41.391 Test: blockdev write zeroes read split ...passed 00:14:41.649 Test: blockdev write zeroes read split partial ...passed 00:14:41.649 Test: blockdev reset ...[2024-07-24 17:15:27.634180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:14:41.649 [2024-07-24 17:15:27.638296] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:41.649 passed 00:14:41.649 Test: blockdev write read 8 blocks ...passed 00:14:41.649 Test: blockdev write read size > 128k ...passed 00:14:41.649 Test: blockdev write read invalid size ...passed 00:14:41.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.649 Test: blockdev write read max offset ...passed 00:14:41.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.649 Test: blockdev writev readv 8 blocks ...passed 00:14:41.649 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.649 Test: blockdev writev readv block ...passed 00:14:41.649 Test: blockdev writev readv size > 128k ...passed 00:14:41.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.649 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.647575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27a02e000 len:0x1000 00:14:41.649 [2024-07-24 17:15:27.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.649 passed 00:14:41.649 Test: blockdev nvme passthru rw ...passed 00:14:41.649 Test: blockdev nvme passthru vendor specific ...passed 00:14:41.649 Test: blockdev nvme admin passthru ...passed 00:14:41.649 Test: blockdev copy ...passed 00:14:41.649 Suite: bdevio tests on: Nvme1n1p1 00:14:41.649 Test: blockdev write read block ...passed 00:14:41.649 Test: blockdev write zeroes read block ...passed 00:14:41.649 Test: blockdev write zeroes read no split ...passed 00:14:41.649 Test: blockdev write zeroes read split ...passed 00:14:41.649 Test: blockdev write zeroes read split partial ...passed 00:14:41.649 Test: blockdev reset ...[2024-07-24 17:15:27.704782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:14:41.649 [2024-07-24 17:15:27.709183] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
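Every suite opens with a "blockdev reset" that drives the controller through nvme_ctrlr_disconnect and back, as in the [0000:00:11.0] records just above. Outside bdevio the same reset path can be exercised by hand against the app's RPC socket; a minimal sketch, assuming the controller was attached under the name Nvme1 and the default socket is in use (both depend on how the app was started):

    # list the attached controllers, then force a disconnect/reconnect cycle on one
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_reset_controller Nvme1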
00:14:41.649 passed 00:14:41.649 Test: blockdev write read 8 blocks ...passed 00:14:41.649 Test: blockdev write read size > 128k ...passed 00:14:41.649 Test: blockdev write read invalid size ...passed 00:14:41.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.649 Test: blockdev write read max offset ...passed 00:14:41.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.649 Test: blockdev writev readv 8 blocks ...passed 00:14:41.649 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.649 Test: blockdev writev readv block ...passed 00:14:41.649 Test: blockdev writev readv size > 128k ...passed 00:14:41.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.649 Test: blockdev comparev and writev ...[2024-07-24 17:15:27.719628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27ae0e000 len:0x1000 00:14:41.649 [2024-07-24 17:15:27.719708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:41.649 passed 00:14:41.649 Test: blockdev nvme passthru rw ...passed 00:14:41.649 Test: blockdev nvme passthru vendor specific ...passed 00:14:41.649 Test: blockdev nvme admin passthru ...passed 00:14:41.649 Test: blockdev copy ...passed 00:14:41.649 Suite: bdevio tests on: Nvme0n1 00:14:41.649 Test: blockdev write read block ...passed 00:14:41.649 Test: blockdev write zeroes read block ...passed 00:14:41.649 Test: blockdev write zeroes read no split ...passed 00:14:41.649 Test: blockdev write zeroes read split ...passed 00:14:41.649 Test: blockdev write zeroes read split partial ...passed 00:14:41.649 Test: blockdev reset ...[2024-07-24 17:15:27.780550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:14:41.649 [2024-07-24 17:15:27.784402] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:41.649 passed 00:14:41.650 Test: blockdev write read 8 blocks ...passed 00:14:41.650 Test: blockdev write read size > 128k ...passed 00:14:41.650 Test: blockdev write read invalid size ...passed 00:14:41.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:41.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:41.650 Test: blockdev write read max offset ...passed 00:14:41.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:41.650 Test: blockdev writev readv 8 blocks ...passed 00:14:41.650 Test: blockdev writev readv 30 x 1block ...passed 00:14:41.650 Test: blockdev writev readv block ...passed 00:14:41.650 Test: blockdev writev readv size > 128k ...passed 00:14:41.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:41.650 Test: blockdev comparev and writev ...passed 00:14:41.650 Test: blockdev nvme passthru rw ...[2024-07-24 17:15:27.792629] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:14:41.650 separate metadata which is not supported yet. 
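comparev_and_writev is skipped on Nvme0n1 because that bdev advertises separate (non-interleaved) metadata, which the test cannot handle yet. Whether a bdev carries metadata, and in which layout, can be inspected over the same RPC interface; a sketch, assuming an rpc.py recent enough that bdev_get_bdevs reports the metadata fields:

    # md_interleave false with a nonzero md_size indicates a separate metadata buffer
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'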
00:14:41.650 passed 00:14:41.650 Test: blockdev nvme passthru vendor specific ...passed 00:14:41.650 Test: blockdev nvme admin passthru ...[2024-07-24 17:15:27.793329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:14:41.650 [2024-07-24 17:15:27.793385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:14:41.650 passed 00:14:41.650 Test: blockdev copy ...passed 00:14:41.650 00:14:41.650 Run Summary: Type Total Ran Passed Failed Inactive 00:14:41.650 suites 7 7 n/a 0 0 00:14:41.650 tests 161 161 161 0 0 00:14:41.650 asserts 1025 1025 1025 0 n/a 00:14:41.650 00:14:41.650 Elapsed time = 1.510 seconds 00:14:41.650 0 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66644 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 66644 ']' 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 66644 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66644 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66644' 00:14:41.650 killing process with pid 66644 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 66644 00:14:41.650 17:15:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 66644 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:43.045 00:14:43.045 real 0m3.020s 00:14:43.045 user 0m7.357s 00:14:43.045 sys 0m0.456s 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 ************************************ 00:14:43.045 END TEST bdev_bounds 00:14:43.045 ************************************ 00:14:43.045 17:15:28 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:43.045 17:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:43.045 17:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.045 17:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 ************************************ 00:14:43.045 START TEST bdev_nbd 00:14:43.045 ************************************ 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66709 00:14:43.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66709 /var/tmp/spdk-nbd.sock 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 66709 ']' 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.045 17:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 [2024-07-24 17:15:29.104122] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
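The bdev_bounds teardown a few records up runs through the killprocess helper from common/autotest_common.sh. Pieced together from the xtrace, it behaves roughly like the sketch below (the sudo branch and error handling are abbreviated):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # no pid recorded
        kill -0 "$pid" || return 1                          # is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 in the trace above
        fi
        if [ "$process_name" != sudo ]; then                # a sudo wrapper would need the child pid instead
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                     # reap it and propagate the exit status
        fi
    }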
00:14:43.045 [2024-07-24 17:15:29.104311] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.303 [2024-07-24 17:15:29.285963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.303 [2024-07-24 17:15:29.539002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:44.237 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.496 1+0 records in 00:14:44.496 1+0 records out 00:14:44.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837081 s, 4.9 MB/s 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:44.496 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.754 1+0 records in 00:14:44.754 1+0 records out 00:14:44.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710167 s, 5.8 MB/s 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:44.754 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.013 1+0 records in 00:14:45.013 1+0 records out 00:14:45.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611961 s, 6.7 MB/s 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:45.013 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.272 1+0 records in 00:14:45.272 1+0 records out 00:14:45.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00238073 s, 1.7 MB/s 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:45.272 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:14:45.529 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:45.529 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.530 1+0 records in 00:14:45.530 1+0 records out 00:14:45.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732177 s, 5.6 MB/s 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:45.530 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
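Each nbd_start_disk call is gated by waitfornbd before the next bdev is exported: first poll /proc/partitions until the kernel publishes the device, then prove it is readable with a single direct-I/O page. Reconstructed from the trace, with the retry sleep and the scratch-file path as assumptions (neither is visible above):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                     # wait for the device node to appear
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                       # assumed back-off between polls
        done
        for ((i = 1; i <= 20; i++)); do                     # one 4 KiB O_DIRECT read as a liveness probe
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1                                       # assumed
        done
        return 1
    }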
00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:45.787 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.787 1+0 records in 00:14:45.787 1+0 records out 00:14:45.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139584 s, 2.9 MB/s 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:45.787 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.353 1+0 records in 00:14:46.353 1+0 records out 00:14:46.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138852 s, 2.9 MB/s 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:46.353 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:46.611 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd0", 00:14:46.612 "bdev_name": "Nvme0n1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd1", 00:14:46.612 "bdev_name": "Nvme1n1p1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd2", 00:14:46.612 "bdev_name": "Nvme1n1p2" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd3", 00:14:46.612 "bdev_name": "Nvme2n1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd4", 00:14:46.612 "bdev_name": "Nvme2n2" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd5", 00:14:46.612 "bdev_name": "Nvme2n3" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd6", 00:14:46.612 "bdev_name": "Nvme3n1" 00:14:46.612 } 00:14:46.612 ]' 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd0", 00:14:46.612 "bdev_name": "Nvme0n1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd1", 00:14:46.612 "bdev_name": "Nvme1n1p1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd2", 00:14:46.612 "bdev_name": "Nvme1n1p2" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd3", 00:14:46.612 "bdev_name": "Nvme2n1" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd4", 00:14:46.612 "bdev_name": "Nvme2n2" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd5", 00:14:46.612 "bdev_name": "Nvme2n3" 00:14:46.612 }, 00:14:46.612 { 00:14:46.612 "nbd_device": "/dev/nbd6", 00:14:46.612 "bdev_name": "Nvme3n1" 00:14:46.612 } 00:14:46.612 ]' 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.612 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.870 17:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.129 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.386 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.643 17:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.901 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.159 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:48.416 17:15:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.416 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:48.674 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:48.674 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:48.963 17:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:49.244 /dev/nbd0 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.244 1+0 records in 00:14:49.244 1+0 records out 00:14:49.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506607 s, 8.1 MB/s 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:49.244 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:14:49.502 /dev/nbd1 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.502 1+0 records in 00:14:49.502 1+0 records out 00:14:49.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105175 s, 3.9 MB/s 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:49.502 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:14:49.760 /dev/nbd10 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.760 1+0 records in 00:14:49.760 1+0 records out 00:14:49.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105525 s, 3.9 MB/s 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:49.760 17:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:14:50.018 /dev/nbd11 00:14:50.018 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:50.018 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:50.018 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:50.018 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:50.018 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.019 1+0 records in 00:14:50.019 1+0 records out 00:14:50.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950046 s, 4.3 MB/s 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:50.019 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:14:50.277 /dev/nbd12 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:50.277 17:15:36 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.277 1+0 records in 00:14:50.277 1+0 records out 00:14:50.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781104 s, 5.2 MB/s 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:50.277 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:14:50.536 /dev/nbd13 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.536 1+0 records in 00:14:50.536 1+0 records out 00:14:50.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769105 s, 5.3 MB/s 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:50.536 17:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:14:50.794 /dev/nbd14 00:14:50.794 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.053 1+0 records in 00:14:51.053 1+0 records out 00:14:51.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122428 s, 3.3 MB/s 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:51.053 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd0", 00:14:51.312 "bdev_name": "Nvme0n1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd1", 00:14:51.312 "bdev_name": "Nvme1n1p1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd10", 00:14:51.312 "bdev_name": "Nvme1n1p2" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd11", 00:14:51.312 "bdev_name": "Nvme2n1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd12", 00:14:51.312 "bdev_name": "Nvme2n2" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd13", 00:14:51.312 "bdev_name": "Nvme2n3" 00:14:51.312 }, 00:14:51.312 { 
00:14:51.312 "nbd_device": "/dev/nbd14", 00:14:51.312 "bdev_name": "Nvme3n1" 00:14:51.312 } 00:14:51.312 ]' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd0", 00:14:51.312 "bdev_name": "Nvme0n1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd1", 00:14:51.312 "bdev_name": "Nvme1n1p1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd10", 00:14:51.312 "bdev_name": "Nvme1n1p2" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd11", 00:14:51.312 "bdev_name": "Nvme2n1" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd12", 00:14:51.312 "bdev_name": "Nvme2n2" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd13", 00:14:51.312 "bdev_name": "Nvme2n3" 00:14:51.312 }, 00:14:51.312 { 00:14:51.312 "nbd_device": "/dev/nbd14", 00:14:51.312 "bdev_name": "Nvme3n1" 00:14:51.312 } 00:14:51.312 ]' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:51.312 /dev/nbd1 00:14:51.312 /dev/nbd10 00:14:51.312 /dev/nbd11 00:14:51.312 /dev/nbd12 00:14:51.312 /dev/nbd13 00:14:51.312 /dev/nbd14' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:51.312 /dev/nbd1 00:14:51.312 /dev/nbd10 00:14:51.312 /dev/nbd11 00:14:51.312 /dev/nbd12 00:14:51.312 /dev/nbd13 00:14:51.312 /dev/nbd14' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:51.312 256+0 records in 00:14:51.312 256+0 records out 00:14:51.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766853 s, 137 MB/s 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:51.312 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:51.571 256+0 records in 00:14:51.571 256+0 records out 00:14:51.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194951 s, 5.4 MB/s 00:14:51.571 
17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:51.571 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:51.571 256+0 records in 00:14:51.571 256+0 records out 00:14:51.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188022 s, 5.6 MB/s 00:14:51.571 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:51.571 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:51.830 256+0 records in 00:14:51.830 256+0 records out 00:14:51.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191208 s, 5.5 MB/s 00:14:51.830 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:51.830 17:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:52.089 256+0 records in 00:14:52.089 256+0 records out 00:14:52.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153991 s, 6.8 MB/s 00:14:52.089 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:52.089 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:52.089 256+0 records in 00:14:52.089 256+0 records out 00:14:52.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163689 s, 6.4 MB/s 00:14:52.089 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:52.089 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:52.348 256+0 records in 00:14:52.348 256+0 records out 00:14:52.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206352 s, 5.1 MB/s 00:14:52.348 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:52.348 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:52.607 256+0 records in 00:14:52.608 256+0 records out 00:14:52.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191737 s, 5.5 MB/s 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.608 17:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.867 
17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.867 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.125 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.384 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.643 17:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:53.901 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:53.902 17:15:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.902 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.160 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.419 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:54.677 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:54.677 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:54.677 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:54.937 
17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:54.937 17:15:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:55.196 malloc_lvol_verify 00:14:55.196 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:55.196 39a19b80-8061-4211-925d-28ce669a63d6 00:14:55.196 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:55.454 5335220d-bca1-456c-bb27-f942672ec58c 00:14:55.454 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:55.713 /dev/nbd0 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:55.713 mke2fs 1.46.5 (30-Dec-2021) 00:14:55.713 Discarding device blocks: 0/4096 done 00:14:55.713 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:55.713 00:14:55.713 Allocating group tables: 0/1 done 00:14:55.713 Writing inode tables: 0/1 done 00:14:55.713 Creating journal (1024 blocks): done 00:14:55.713 Writing superblocks and filesystem accounting information: 0/1 done 00:14:55.713 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:55.713 17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.713 
17:15:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66709 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 66709 ']' 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 66709 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66709 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.972 killing process with pid 66709 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66709' 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 66709 00:14:55.972 17:15:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 66709 00:14:57.347 17:15:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:57.347 00:14:57.347 real 0m14.383s 00:14:57.347 user 0m20.110s 00:14:57.347 sys 0m4.797s 00:14:57.347 ************************************ 00:14:57.347 END TEST bdev_nbd 00:14:57.347 ************************************ 00:14:57.347 17:15:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.347 17:15:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:14:57.347 skipping fio tests on NVMe due to multi-ns failures. 00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
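The lvol round-trip that closed out the nbd test can be read straight from the trace above: build a malloc bdev, put an lvstore on it, carve out a 4 MiB lvol, export it over /dev/nbd0, and prove the whole stack works by formatting it. Condensed into a standalone sketch (rpc socket, names, and sizes exactly as traced; this is a reconstruction, not the verbatim nbd_common.sh source):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the lvstore
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0

# if mkfs.ext4 succeeds, writes and read-backs worked through the
# entire malloc -> lvol -> nbd stack
mkfs.ext4 /dev/nbd0
$rpc nbd_stop_disk /dev/nbd0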
00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:57.347 17:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:57.347 17:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:57.347 17:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.347 17:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:57.347 ************************************ 00:14:57.347 START TEST bdev_verify 00:14:57.347 ************************************ 00:14:57.347 17:15:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:57.347 [2024-07-24 17:15:43.533295] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:57.347 [2024-07-24 17:15:43.533494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67152 ] 00:14:57.604 [2024-07-24 17:15:43.708463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:57.862 [2024-07-24 17:15:43.937946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.862 [2024-07-24 17:15:43.937963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.433 Running I/O for 5 seconds... 
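For reference, this verify pass is driven by the bdevperf invocation captured in the run_test line above; the flag comments below give the standard bdevperf meanings, and -C is simply passed through as the harness does:

# 128 outstanding IOs per target, 4 KiB per IO, 'verify' workload
# (each write is read back and checked), 5 s runtime, cores 0-1 (-m 0x3)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3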
00:15:03.699 00:15:03.699 Latency(us) 00:15:03.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0xbd0bd 00:15:03.699 Nvme0n1 : 5.06 1390.56 5.43 0.00 0.00 91779.47 20971.52 87699.08 00:15:03.699 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:03.699 Nvme0n1 : 5.07 1363.93 5.33 0.00 0.00 93513.63 24784.52 88175.71 00:15:03.699 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x4ff80 00:15:03.699 Nvme1n1p1 : 5.06 1390.10 5.43 0.00 0.00 91615.71 24069.59 82932.83 00:15:03.699 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x4ff80 length 0x4ff80 00:15:03.699 Nvme1n1p1 : 5.07 1362.58 5.32 0.00 0.00 93389.90 26452.71 81026.33 00:15:03.699 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x4ff7f 00:15:03.699 Nvme1n1p2 : 5.07 1389.59 5.43 0.00 0.00 91467.85 23235.49 81026.33 00:15:03.699 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:15:03.699 Nvme1n1p2 : 5.07 1361.99 5.32 0.00 0.00 93193.67 24427.05 75783.45 00:15:03.699 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x80000 00:15:03.699 Nvme2n1 : 5.07 1389.12 5.43 0.00 0.00 91318.68 22520.55 78643.20 00:15:03.699 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x80000 length 0x80000 00:15:03.699 Nvme2n1 : 5.08 1361.44 5.32 0.00 0.00 93038.90 23592.96 74830.20 00:15:03.699 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x80000 00:15:03.699 Nvme2n2 : 5.07 1388.01 5.42 0.00 0.00 91186.31 21567.30 77213.32 00:15:03.699 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x80000 length 0x80000 00:15:03.699 Nvme2n2 : 5.08 1360.87 5.32 0.00 0.00 92877.89 22758.87 80073.08 00:15:03.699 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x80000 00:15:03.699 Nvme2n3 : 5.08 1397.96 5.46 0.00 0.00 90453.48 3455.53 80549.70 00:15:03.699 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x80000 length 0x80000 00:15:03.699 Nvme2n3 : 5.09 1370.14 5.35 0.00 0.00 92212.09 3932.16 81502.95 00:15:03.699 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x0 length 0x20000 00:15:03.699 Nvme3n1 : 5.09 1407.26 5.50 0.00 0.00 89732.24 7298.33 85315.96 00:15:03.699 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.699 Verification LBA range: start 0x20000 length 0x20000 00:15:03.699 Nvme3n1 : 5.11 1378.54 5.38 0.00 0.00 91547.49 7923.90 84362.71 00:15:03.699 =================================================================================================================== 00:15:03.699 Total : 19312.08 75.44 0.00 0.00 91939.74 3455.53 88175.71 00:15:05.074 
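A quick sanity check on the Total row above: at 4 KiB per IO the MiB/s column is fully determined by the IOPS column (a back-of-the-envelope cross-check, not part of the test itself):

# 19312.08 IOPS * 4096 B / 2^20 B per MiB ~= 75.44 MiB/s, matching the table
awk 'BEGIN { printf "%.2f MiB/s\n", 19312.08 * 4096 / 1048576 }'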
00:15:05.074 real 0m7.760s 00:15:05.074 user 0m14.031s 00:15:05.074 sys 0m0.361s 00:15:05.074 17:15:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.074 17:15:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:05.074 ************************************ 00:15:05.074 END TEST bdev_verify 00:15:05.075 ************************************ 00:15:05.075 17:15:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:05.075 17:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:05.075 17:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.075 17:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:05.075 ************************************ 00:15:05.075 START TEST bdev_verify_big_io 00:15:05.075 ************************************ 00:15:05.075 17:15:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:05.333 [2024-07-24 17:15:51.325643] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:05.333 [2024-07-24 17:15:51.325827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67256 ] 00:15:05.333 [2024-07-24 17:15:51.491214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:05.591 [2024-07-24 17:15:51.726136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.591 [2024-07-24 17:15:51.726153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.526 Running I/O for 5 seconds... 
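bdev_verify_big_io repeats the same verify workload with -o 65536, i.e. 64 KiB IOs instead of 4 KiB, so in the table that follows MiB/s is simply IOPS/16. The first row (Nvme0n1) can be checked the same way:

# 116.57 IOPS * 65536 B / 2^20 B per MiB ~= 7.29 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 116.57 * 65536 / 1048576 }'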
00:15:13.081 00:15:13.081 Latency(us) 00:15:13.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.082 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0xbd0b 00:15:13.082 Nvme0n1 : 5.77 116.57 7.29 0.00 0.00 1057854.77 26095.24 1426063.36 00:15:13.082 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:13.082 Nvme0n1 : 5.91 92.12 5.76 0.00 0.00 1330153.00 26571.87 1845493.76 00:15:13.082 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x4ff8 00:15:13.082 Nvme1n1p1 : 5.77 127.82 7.99 0.00 0.00 936763.68 74353.57 957063.91 00:15:13.082 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x4ff8 length 0x4ff8 00:15:13.082 Nvme1n1p1 : 5.86 131.58 8.22 0.00 0.00 911110.98 96278.34 899868.86 00:15:13.082 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x4ff7 00:15:13.082 Nvme1n1p2 : 5.67 129.66 8.10 0.00 0.00 914111.20 74830.20 972315.93 00:15:13.082 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x4ff7 length 0x4ff7 00:15:13.082 Nvme1n1p2 : 5.77 133.00 8.31 0.00 0.00 889729.09 108193.98 850299.81 00:15:13.082 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x8000 00:15:13.082 Nvme2n1 : 5.77 133.07 8.32 0.00 0.00 869497.64 97231.59 907494.87 00:15:13.082 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x8000 length 0x8000 00:15:13.082 Nvme2n1 : 5.86 134.94 8.43 0.00 0.00 853287.61 84362.71 854112.81 00:15:13.082 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x8000 00:15:13.082 Nvme2n2 : 5.86 134.53 8.41 0.00 0.00 834236.69 82932.83 922746.88 00:15:13.082 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x8000 length 0x8000 00:15:13.082 Nvme2n2 : 5.90 133.65 8.35 0.00 0.00 848334.03 25141.99 1715851.64 00:15:13.082 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x8000 00:15:13.082 Nvme2n3 : 5.90 148.41 9.28 0.00 0.00 748405.08 7864.32 934185.89 00:15:13.082 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x8000 length 0x8000 00:15:13.082 Nvme2n3 : 5.90 138.12 8.63 0.00 0.00 802941.33 7208.96 1738729.66 00:15:13.082 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x0 length 0x2000 00:15:13.082 Nvme3n1 : 5.90 151.85 9.49 0.00 0.00 712072.38 7804.74 957063.91 00:15:13.082 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:13.082 Verification LBA range: start 0x2000 length 0x2000 00:15:13.082 Nvme3n1 : 5.92 143.45 8.97 0.00 0.00 751656.83 4319.42 1769233.69 00:15:13.082 =================================================================================================================== 00:15:13.082 Total : 1848.77 115.55 0.00 0.00 873996.69 4319.42 
1845493.76 00:15:14.456 00:15:14.456 real 0m9.087s 00:15:14.456 user 0m16.566s 00:15:14.456 sys 0m0.371s 00:15:14.456 17:16:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.456 ************************************ 00:15:14.456 END TEST bdev_verify_big_io 00:15:14.456 ************************************ 00:15:14.456 17:16:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.456 17:16:00 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.456 17:16:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:14.456 17:16:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.456 17:16:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:14.456 ************************************ 00:15:14.456 START TEST bdev_write_zeroes 00:15:14.456 ************************************ 00:15:14.456 17:16:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.456 [2024-07-24 17:16:00.466733] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:14.456 [2024-07-24 17:16:00.466916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67381 ] 00:15:14.456 [2024-07-24 17:16:00.627485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.715 [2024-07-24 17:16:00.845521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.314 Running I/O for 1 seconds... 
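The write_zeroes pass just launched swaps the workload for the bdev zero-fill path: each command asks the bdev to zero a 4 KiB range (the bdev layer falls back to writing zero buffers where the device has no native support). Invocation as captured in the run_test line above, plus the same throughput cross-check against the Total row that follows:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1

# 55470.61 IOPS * 4096 B / 2^20 ~= 216.68 MiB/s, matching the table below
awk 'BEGIN { printf "%.2f MiB/s\n", 55470.61 * 4096 / 1048576 }'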
00:15:16.687 00:15:16.687 Latency(us) 00:15:16.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.687 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme0n1 : 1.02 7924.19 30.95 0.00 0.00 16094.45 12153.95 29669.93 00:15:16.687 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme1n1p1 : 1.02 7912.03 30.91 0.00 0.00 16088.93 12571.00 28955.00 00:15:16.687 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme1n1p2 : 1.02 7903.41 30.87 0.00 0.00 16012.78 12034.79 27405.96 00:15:16.687 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme2n1 : 1.02 7936.11 31.00 0.00 0.00 15927.65 9651.67 24069.59 00:15:16.687 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme2n2 : 1.03 7926.26 30.96 0.00 0.00 15891.55 9711.24 22282.24 00:15:16.687 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme2n3 : 1.03 7970.23 31.13 0.00 0.00 15786.69 5213.09 20733.21 00:15:16.687 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:16.687 Nvme3n1 : 1.03 7898.39 30.85 0.00 0.00 15889.58 5302.46 28955.00 00:15:16.687 =================================================================================================================== 00:15:16.687 Total : 55470.61 216.68 0.00 0.00 15955.38 5213.09 29669.93 00:15:17.633 00:15:17.633 real 0m3.273s 00:15:17.633 user 0m2.868s 00:15:17.633 sys 0m0.284s 00:15:17.633 ************************************ 00:15:17.633 END TEST bdev_write_zeroes 00:15:17.633 ************************************ 00:15:17.633 17:16:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.633 17:16:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:17.633 17:16:03 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:17.633 17:16:03 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:17.633 17:16:03 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.633 17:16:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:17.633 ************************************ 00:15:17.633 START TEST bdev_json_nonenclosed 00:15:17.633 ************************************ 00:15:17.633 17:16:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:17.633 [2024-07-24 17:16:03.792185] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
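bdev_json_nonenclosed, now starting, is a negative test: bdevperf is fed a nonenclosed.json whose top level is not wrapped in {} and must reject it (the json_config "not enclosed in {}" error below) and exit non-zero. The shape of the check, sketched; the fixture content shown is an assumption for illustration, not the repo's actual nonenclosed.json:

# top-level key/value without enclosing braces -- an invalid JSON config
echo '"subsystems": []' > nonenclosed.json

# the run only counts as a pass if bdevperf refuses the config
if ./build/examples/bdevperf --json nonenclosed.json \
       -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "FAIL: invalid config was accepted" >&2
    exit 1
fi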
00:15:17.633 [2024-07-24 17:16:03.792357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67434 ] 00:15:17.891 [2024-07-24 17:16:03.949085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.147 [2024-07-24 17:16:04.169742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.147 [2024-07-24 17:16:04.169893] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:18.147 [2024-07-24 17:16:04.169925] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:18.147 [2024-07-24 17:16:04.169944] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:18.404 00:15:18.404 real 0m0.853s 00:15:18.404 user 0m0.616s 00:15:18.404 sys 0m0.132s 00:15:18.404 ************************************ 00:15:18.404 END TEST bdev_json_nonenclosed 00:15:18.404 ************************************ 00:15:18.404 17:16:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.404 17:16:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:18.404 17:16:04 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:18.404 17:16:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:18.404 17:16:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.404 17:16:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:18.404 ************************************ 00:15:18.404 START TEST bdev_json_nonarray 00:15:18.404 ************************************ 00:15:18.404 17:16:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:18.662 [2024-07-24 17:16:04.723586] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:18.662 [2024-07-24 17:16:04.723808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67461 ] 00:15:18.662 [2024-07-24 17:16:04.898789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.919 [2024-07-24 17:16:05.119605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.919 [2024-07-24 17:16:05.119774] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:18.919 [2024-07-24 17:16:05.119807] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:18.919 [2024-07-24 17:16:05.119825] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:19.485 00:15:19.485 real 0m0.895s 00:15:19.485 user 0m0.635s 00:15:19.485 sys 0m0.152s 00:15:19.485 ************************************ 00:15:19.485 END TEST bdev_json_nonarray 00:15:19.485 ************************************ 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:19.485 17:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:15:19.485 17:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:15:19.485 17:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:15:19.485 17:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:19.485 17:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.485 17:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:19.485 ************************************ 00:15:19.485 START TEST bdev_gpt_uuid 00:15:19.485 ************************************ 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67491 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67491 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67491 ']' 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.485 17:16:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:19.759 [2024-07-24 17:16:05.735043] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
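The gpt_uuid test spinning up here boots a plain spdk_tgt, loads the same bdev.json, and asserts that each GPT partition is addressable as a bdev by its unique partition GUID. The core of the assertion, condensed from the rpc/jq trace that follows (UUIDs are the ones stamped onto Nvme1n1 earlier in the run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first partition

# look the partition up by UUID; this only works if gpt examine claimed it
bdev=$($rpc bdev_get_bdevs -b "$uuid")

# both the bdev alias and the GPT unique_partition_guid must round-trip
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]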
00:15:19.759 [2024-07-24 17:16:05.735274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67491 ] 00:15:19.759 [2024-07-24 17:16:05.911583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.017 [2024-07-24 17:16:06.173953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.948 17:16:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.948 17:16:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:15:20.948 17:16:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:20.948 17:16:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.948 17:16:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 Some configs were skipped because the RPC state that can call them passed over. 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:15:21.205 { 00:15:21.205 "name": "Nvme1n1p1", 00:15:21.205 "aliases": [ 00:15:21.205 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:15:21.205 ], 00:15:21.205 "product_name": "GPT Disk", 00:15:21.205 "block_size": 4096, 00:15:21.205 "num_blocks": 655104, 00:15:21.205 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:21.205 "assigned_rate_limits": { 00:15:21.205 "rw_ios_per_sec": 0, 00:15:21.205 "rw_mbytes_per_sec": 0, 00:15:21.205 "r_mbytes_per_sec": 0, 00:15:21.205 "w_mbytes_per_sec": 0 00:15:21.205 }, 00:15:21.205 "claimed": false, 00:15:21.205 "zoned": false, 00:15:21.205 "supported_io_types": { 00:15:21.205 "read": true, 00:15:21.205 "write": true, 00:15:21.205 "unmap": true, 00:15:21.205 "flush": true, 00:15:21.205 "reset": true, 00:15:21.205 "nvme_admin": false, 00:15:21.205 "nvme_io": false, 00:15:21.205 "nvme_io_md": false, 00:15:21.205 "write_zeroes": true, 00:15:21.205 "zcopy": false, 00:15:21.205 "get_zone_info": false, 00:15:21.205 "zone_management": false, 00:15:21.205 "zone_append": false, 00:15:21.205 "compare": true, 00:15:21.205 "compare_and_write": false, 00:15:21.205 "abort": true, 00:15:21.205 "seek_hole": false, 00:15:21.205 "seek_data": false, 00:15:21.205 "copy": true, 00:15:21.205 "nvme_iov_md": false 00:15:21.205 }, 00:15:21.205 "driver_specific": { 
00:15:21.205 "gpt": { 00:15:21.205 "base_bdev": "Nvme1n1", 00:15:21.205 "offset_blocks": 256, 00:15:21.205 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:15:21.205 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:21.205 "partition_name": "SPDK_TEST_first" 00:15:21.205 } 00:15:21.205 } 00:15:21.205 } 00:15:21.205 ]' 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:21.205 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:15:21.463 { 00:15:21.463 "name": "Nvme1n1p2", 00:15:21.463 "aliases": [ 00:15:21.463 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:15:21.463 ], 00:15:21.463 "product_name": "GPT Disk", 00:15:21.463 "block_size": 4096, 00:15:21.463 "num_blocks": 655103, 00:15:21.463 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:21.463 "assigned_rate_limits": { 00:15:21.463 "rw_ios_per_sec": 0, 00:15:21.463 "rw_mbytes_per_sec": 0, 00:15:21.463 "r_mbytes_per_sec": 0, 00:15:21.463 "w_mbytes_per_sec": 0 00:15:21.463 }, 00:15:21.463 "claimed": false, 00:15:21.463 "zoned": false, 00:15:21.463 "supported_io_types": { 00:15:21.463 "read": true, 00:15:21.463 "write": true, 00:15:21.463 "unmap": true, 00:15:21.463 "flush": true, 00:15:21.463 "reset": true, 00:15:21.463 "nvme_admin": false, 00:15:21.463 "nvme_io": false, 00:15:21.463 "nvme_io_md": false, 00:15:21.463 "write_zeroes": true, 00:15:21.463 "zcopy": false, 00:15:21.463 "get_zone_info": false, 00:15:21.463 "zone_management": false, 00:15:21.463 "zone_append": false, 00:15:21.463 "compare": true, 00:15:21.463 "compare_and_write": false, 00:15:21.463 "abort": true, 00:15:21.463 "seek_hole": false, 00:15:21.463 "seek_data": false, 00:15:21.463 "copy": true, 00:15:21.463 "nvme_iov_md": false 00:15:21.463 }, 00:15:21.463 "driver_specific": { 00:15:21.463 "gpt": { 00:15:21.463 "base_bdev": "Nvme1n1", 00:15:21.463 "offset_blocks": 655360, 00:15:21.463 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:15:21.463 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:21.463 "partition_name": "SPDK_TEST_second" 00:15:21.463 } 00:15:21.463 } 00:15:21.463 } 00:15:21.463 ]' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67491 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67491 ']' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67491 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67491 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.463 killing process with pid 67491 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67491' 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67491 00:15:21.463 17:16:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67491 00:15:23.991 00:15:23.991 real 0m4.107s 00:15:23.991 user 0m4.289s 00:15:23.991 sys 0m0.607s 00:15:23.991 ************************************ 00:15:23.991 END TEST bdev_gpt_uuid 00:15:23.991 ************************************ 00:15:23.991 17:16:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.991 17:16:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:15:23.991 17:16:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:23.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.248 Waiting for block devices as requested 00:15:24.248 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:24.248 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:15:24.506 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:24.506 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:29.774 17:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:15:29.774 17:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:15:30.033 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:30.033 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:30.033 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:30.033 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:30.033 17:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:15:30.033 00:15:30.033 real 1m6.250s 00:15:30.033 user 1m23.223s 00:15:30.033 sys 0m10.766s 00:15:30.033 17:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.033 ************************************ 00:15:30.033 END TEST blockdev_nvme_gpt 00:15:30.033 ************************************ 00:15:30.033 17:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:30.033 17:16:16 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:30.033 17:16:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:30.033 17:16:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.033 17:16:16 -- common/autotest_common.sh@10 -- # set +x 00:15:30.033 ************************************ 00:15:30.033 START TEST nvme 00:15:30.033 ************************************ 00:15:30.033 17:16:16 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:30.033 * Looking for test storage... 00:15:30.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:30.033 17:16:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:30.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:31.179 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:31.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:31.179 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:31.179 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:31.438 17:16:17 nvme -- nvme/nvme.sh@79 -- # uname 00:15:31.438 17:16:17 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:15:31.438 17:16:17 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:15:31.438 17:16:17 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1071 -- # stubpid=68134 00:15:31.438 Waiting for stub to ready for secondary processes... 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
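The wait that follows is the harness polling for stub readiness. A minimal sketch of the same pattern, assuming a stub that creates /var/run/spdk_stub0 once its primary process is up (the pid and file names mirror the trace below; this is an illustration, not the autotest_common.sh source):

    # Wait for the stub's ready file; give up if the stub process dies first.
    stubpid=68134    # pid reported by the harness in the trace; hypothetical here
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e /proc/$stubpid ] || { echo 'stub exited before becoming ready' >&2; exit 1; }
        sleep 1s     # matches the 1-second retry visible in the trace
    done
    echo done.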
00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68134 ]] 00:15:31.438 17:16:17 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:15:31.438 [2024-07-24 17:16:17.551469] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:31.438 [2024-07-24 17:16:17.551707] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:15:32.374 17:16:18 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:32.374 17:16:18 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68134 ]] 00:15:32.374 17:16:18 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:15:33.309 [2024-07-24 17:16:19.254374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.309 [2024-07-24 17:16:19.488306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.309 [2024-07-24 17:16:19.488432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.309 [2024-07-24 17:16:19.488446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.309 17:16:19 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:33.309 17:16:19 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68134 ]] 00:15:33.309 17:16:19 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:15:33.309 [2024-07-24 17:16:19.507894] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:15:33.309 [2024-07-24 17:16:19.507946] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:33.309 [2024-07-24 17:16:19.518184] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:15:33.309 [2024-07-24 17:16:19.518453] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:15:33.309 [2024-07-24 17:16:19.523344] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:33.309 [2024-07-24 17:16:19.523538] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:15:33.309 [2024-07-24 17:16:19.523611] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:15:33.309 [2024-07-24 17:16:19.525844] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:33.309 [2024-07-24 17:16:19.526060] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:15:33.309 [2024-07-24 17:16:19.526140] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:15:33.309 [2024-07-24 17:16:19.528781] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:33.309 [2024-07-24 17:16:19.528972] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:15:33.309 [2024-07-24 17:16:19.529050] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:15:33.309 [2024-07-24 17:16:19.529106] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:15:33.309 [2024-07-24 17:16:19.529155] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:15:34.682 17:16:20 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:34.682 done. 00:15:34.682 17:16:20 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:15:34.682 17:16:20 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:15:34.682 17:16:20 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:15:34.682 17:16:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.682 17:16:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.682 ************************************ 00:15:34.682 START TEST nvme_reset 00:15:34.682 ************************************ 00:15:34.682 17:16:20 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:15:34.682 Initializing NVMe Controllers 00:15:34.682 Skipping QEMU NVMe SSD at 0000:00:10.0 00:15:34.682 Skipping QEMU NVMe SSD at 0000:00:11.0 00:15:34.682 Skipping QEMU NVMe SSD at 0000:00:13.0 00:15:34.682 Skipping QEMU NVMe SSD at 0000:00:12.0 00:15:34.682 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:15:34.682 00:15:34.682 real 0m0.359s 00:15:34.682 user 0m0.121s 00:15:34.682 sys 0m0.181s 00:15:34.682 17:16:20 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.682 17:16:20 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:15:34.682 ************************************ 00:15:34.682 END TEST nvme_reset 00:15:34.682 ************************************ 00:15:34.940 17:16:20 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:15:34.940 17:16:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:34.940 17:16:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.940 17:16:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.940 ************************************ 00:15:34.940 START TEST nvme_identify 00:15:34.940 ************************************ 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:15:34.940 17:16:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:15:34.940 17:16:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:15:34.940 17:16:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:15:34.940 17:16:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:15:34.940 17:16:20 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:34.940 17:16:20 
nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:15:35.201 ===================================================== 00:15:35.201 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:35.201 ===================================================== 00:15:35.201 Controller Capabilities/Features 00:15:35.201 ================================ 00:15:35.201 Vendor ID: 1b36 00:15:35.201 Subsystem Vendor ID: 1af4 00:15:35.201 Serial Number: 12340 00:15:35.201 Model Number: QEMU NVMe Ctrl 00:15:35.201 Firmware Version: 8.0.0 00:15:35.201 Recommended Arb Burst: 6 00:15:35.201 IEEE OUI Identifier: 00 54 52 00:15:35.201 Multi-path I/O 00:15:35.201 May have multiple subsystem ports: No 00:15:35.201 May have multiple controllers: No 00:15:35.201 Associated with SR-IOV VF: No 00:15:35.201 Max Data Transfer Size: 524288 00:15:35.201 Max Number of Namespaces: 256 00:15:35.201 Max Number of I/O Queues: 64 00:15:35.201 NVMe Specification Version (VS): 1.4 00:15:35.201 NVMe Specification Version (Identify): 1.4 00:15:35.201 Maximum Queue Entries: 2048 00:15:35.201 Contiguous Queues Required: Yes 00:15:35.201 Arbitration Mechanisms Supported 00:15:35.201 Weighted Round Robin: Not Supported 00:15:35.202 Vendor Specific: Not Supported 00:15:35.202 Reset Timeout: 7500 ms 00:15:35.202 Doorbell Stride: 4 bytes 00:15:35.202 NVM Subsystem Reset: Not Supported 00:15:35.202 Command Sets Supported 00:15:35.202 NVM Command Set: Supported 00:15:35.202 Boot Partition: Not Supported 00:15:35.202 Memory Page Size Minimum: 4096 bytes 00:15:35.202 Memory Page Size Maximum: 65536 bytes 00:15:35.202 Persistent Memory Region: Not Supported 00:15:35.202 Optional Asynchronous Events Supported 00:15:35.202 Namespace Attribute Notices: Supported 00:15:35.202 Firmware Activation Notices: Not Supported 00:15:35.202 ANA Change Notices: Not Supported 00:15:35.202 PLE Aggregate Log Change Notices: Not Supported 00:15:35.202 LBA Status Info Alert Notices: Not Supported 00:15:35.202 EGE Aggregate Log Change Notices: Not Supported 00:15:35.202 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.202 Zone Descriptor Change Notices: Not Supported 00:15:35.202 Discovery Log Change Notices: Not Supported 00:15:35.202 Controller Attributes 00:15:35.202 128-bit Host Identifier: Not Supported 00:15:35.202 Non-Operational Permissive Mode: Not Supported 00:15:35.202 NVM Sets: Not Supported 00:15:35.202 Read Recovery Levels: Not Supported 00:15:35.202 Endurance Groups: Not Supported 00:15:35.202 Predictable Latency Mode: Not Supported 00:15:35.202 Traffic Based Keep ALive: Not Supported 00:15:35.202 Namespace Granularity: Not Supported 00:15:35.202 SQ Associations: Not Supported 00:15:35.202 UUID List: Not Supported 00:15:35.202 Multi-Domain Subsystem: Not Supported 00:15:35.202 Fixed Capacity Management: Not Supported 00:15:35.202 Variable Capacity Management: Not Supported 00:15:35.202 Delete Endurance Group: Not Supported 00:15:35.202 Delete NVM Set: Not Supported 00:15:35.202 Extended LBA Formats Supported: Supported 00:15:35.202 Flexible Data Placement Supported: Not Supported 00:15:35.202 00:15:35.202 Controller Memory Buffer Support 00:15:35.202 ================================ 00:15:35.202 Supported: No 00:15:35.202 00:15:35.202 Persistent Memory Region Support 00:15:35.202 ================================ 00:15:35.202 Supported: No 00:15:35.202 00:15:35.202 Admin Command Set Attributes 00:15:35.202 ============================ 00:15:35.202 Security Send/Receive: Not Supported 00:15:35.202 
Format NVM: Supported 00:15:35.202 Firmware Activate/Download: Not Supported 00:15:35.202 Namespace Management: Supported 00:15:35.202 Device Self-Test: Not Supported 00:15:35.202 Directives: Supported 00:15:35.202 NVMe-MI: Not Supported 00:15:35.202 Virtualization Management: Not Supported 00:15:35.202 Doorbell Buffer Config: Supported 00:15:35.202 Get LBA Status Capability: Not Supported 00:15:35.202 Command & Feature Lockdown Capability: Not Supported 00:15:35.202 Abort Command Limit: 4 00:15:35.202 Async Event Request Limit: 4 00:15:35.202 Number of Firmware Slots: N/A 00:15:35.202 Firmware Slot 1 Read-Only: N/A 00:15:35.202 Firmware Activation Without Reset: N/A 00:15:35.202 Multiple Update Detection Support: N/A 00:15:35.202 Firmware Update Granularity: No Information Provided 00:15:35.202 Per-Namespace SMART Log: Yes 00:15:35.202 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.202 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:15:35.202 Command Effects Log Page: Supported 00:15:35.202 Get Log Page Extended Data: Supported 00:15:35.202 Telemetry Log Pages: Not Supported 00:15:35.202 Persistent Event Log Pages: Not Supported 00:15:35.202 Supported Log Pages Log Page: May Support 00:15:35.202 Commands Supported & Effects Log Page: Not Supported 00:15:35.202 Feature Identifiers & Effects Log Page:May Support 00:15:35.202 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.202 Data Area 4 for Telemetry Log: Not Supported 00:15:35.202 Error Log Page Entries Supported: 1 00:15:35.202 Keep Alive: Not Supported 00:15:35.202 00:15:35.202 NVM Command Set Attributes 00:15:35.202 ========================== 00:15:35.202 Submission Queue Entry Size 00:15:35.202 Max: 64 00:15:35.202 Min: 64 00:15:35.202 Completion Queue Entry Size 00:15:35.202 Max: 16 00:15:35.202 Min: 16 00:15:35.202 Number of Namespaces: 256 00:15:35.202 Compare Command: Supported 00:15:35.202 Write Uncorrectable Command: Not Supported 00:15:35.202 Dataset Management Command: Supported 00:15:35.202 Write Zeroes Command: Supported 00:15:35.202 Set Features Save Field: Supported 00:15:35.202 Reservations: Not Supported 00:15:35.202 Timestamp: Supported 00:15:35.202 Copy: Supported 00:15:35.202 Volatile Write Cache: Present 00:15:35.202 Atomic Write Unit (Normal): 1 00:15:35.202 Atomic Write Unit (PFail): 1 00:15:35.202 Atomic Compare & Write Unit: 1 00:15:35.202 Fused Compare & Write: Not Supported 00:15:35.202 Scatter-Gather List 00:15:35.202 SGL Command Set: Supported 00:15:35.202 SGL Keyed: Not Supported 00:15:35.202 SGL Bit Bucket Descriptor: Not Supported 00:15:35.202 SGL Metadata Pointer: Not Supported 00:15:35.202 Oversized SGL: Not Supported 00:15:35.202 SGL Metadata Address: Not Supported 00:15:35.202 SGL Offset: Not Supported 00:15:35.202 Transport SGL Data Block: Not Supported 00:15:35.202 Replay Protected Memory Block: Not Supported 00:15:35.202 00:15:35.202 Firmware Slot Information 00:15:35.202 ========================= 00:15:35.202 Active slot: 1 00:15:35.202 Slot 1 Firmware Revision: 1.0 00:15:35.202 00:15:35.202 00:15:35.202 Commands Supported and Effects 00:15:35.202 ============================== 00:15:35.202 Admin Commands 00:15:35.202 -------------- 00:15:35.202 Delete I/O Submission Queue (00h): Supported 00:15:35.202 Create I/O Submission Queue (01h): Supported 00:15:35.202 Get Log Page (02h): Supported 00:15:35.202 Delete I/O Completion Queue (04h): Supported 00:15:35.202 Create I/O Completion Queue (05h): Supported 00:15:35.202 Identify (06h): Supported 00:15:35.202 Abort (08h): Supported 
00:15:35.202 Set Features (09h): Supported 00:15:35.202 Get Features (0Ah): Supported 00:15:35.202 Asynchronous Event Request (0Ch): Supported 00:15:35.202 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.202 Directive Send (19h): Supported 00:15:35.202 Directive Receive (1Ah): Supported 00:15:35.202 Virtualization Management (1Ch): Supported 00:15:35.202 Doorbell Buffer Config (7Ch): Supported 00:15:35.202 Format NVM (80h): Supported LBA-Change 00:15:35.202 I/O Commands 00:15:35.202 ------------ 00:15:35.202 Flush (00h): Supported LBA-Change 00:15:35.202 Write (01h): Supported LBA-Change 00:15:35.202 Read (02h): Supported 00:15:35.202 Compare (05h): Supported 00:15:35.202 Write Zeroes (08h): Supported LBA-Change 00:15:35.202 Dataset Management (09h): Supported LBA-Change 00:15:35.202 Unknown (0Ch): Supported 00:15:35.202 Unknown (12h): Supported 00:15:35.202 Copy (19h): Supported LBA-Change 00:15:35.202 Unknown (1Dh): Supported LBA-Change 00:15:35.202 00:15:35.202 Error Log 00:15:35.202 ========= 00:15:35.202 00:15:35.202 Arbitration 00:15:35.202 =========== 00:15:35.202 Arbitration Burst: no limit 00:15:35.202 00:15:35.202 Power Management 00:15:35.202 ================ 00:15:35.202 Number of Power States: 1 00:15:35.202 Current Power State: Power State #0 00:15:35.202 Power State #0: 00:15:35.202 Max Power: 25.00 W 00:15:35.202 Non-Operational State: Operational 00:15:35.202 Entry Latency: 16 microseconds 00:15:35.202 Exit Latency: 4 microseconds 00:15:35.202 Relative Read Throughput: 0 00:15:35.202 Relative Read Latency: 0 00:15:35.202 Relative Write Throughput: 0 00:15:35.202 Relative Write Latency: 0 00:15:35.202 Idle Power: Not Reported 00:15:35.202 Active Power: Not Reported 00:15:35.202 Non-Operational Permissive Mode: Not Supported 00:15:35.202 00:15:35.202 Health Information 00:15:35.202 ================== 00:15:35.202 Critical Warnings: 00:15:35.202 Available Spare Space: OK 00:15:35.202 Temperature: OK 00:15:35.202 Device Reliability: OK 00:15:35.202 Read Only: No 00:15:35.202 Volatile Memory Backup: OK 00:15:35.202 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.202 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.203 Available Spare: 0% 00:15:35.203 Available Spare Threshold: 0% 00:15:35.203 Life Percentage Used: 0% 00:15:35.203 Data Units Read: 697 00:15:35.203 Data Units Written: 589 00:15:35.203 Host Read Commands: 34128 00:15:35.203 Host Write Commands: 33166 00:15:35.203 Controller Busy Time: 0 minutes 00:15:35.203 Power Cycles: 0 00:15:35.203 Power On Hours: 0 hours 00:15:35.203 Unsafe Shutdowns: 0 00:15:35.203 Unrecoverable Media Errors: 0 00:15:35.203 Lifetime Error Log Entries: 0 00:15:35.203 Warning Temperature Time: 0 minutes 00:15:35.203 Critical Temperature Time: 0 minutes 00:15:35.203 00:15:35.203 Number of Queues 00:15:35.203 ================ 00:15:35.203 Number of I/O Submission Queues: 64 00:15:35.203 Number of I/O Completion Queues: 64 00:15:35.203 00:15:35.203 ZNS Specific Controller Data 00:15:35.203 ============================ 00:15:35.203 Zone Append Size Limit: 0 00:15:35.203 00:15:35.203 00:15:35.203 Active Namespaces 00:15:35.203 ================= 00:15:35.203 Namespace ID:1 00:15:35.203 Error Recovery Timeout: Unlimited 00:15:35.203 Command Set Identifier: NVM (00h) 00:15:35.203 Deallocate: Supported 00:15:35.203 Deallocated/Unwritten Error: Supported 00:15:35.203 Deallocated Read Value: All 0x00 00:15:35.203 Deallocate in Write Zeroes: Not Supported 00:15:35.203 Deallocated Guard Field: 0xFFFF 00:15:35.203 Flush: 
Supported 00:15:35.203 Reservation: Not Supported 00:15:35.203 Metadata Transferred as: Separate Metadata Buffer 00:15:35.203 Namespace Sharing Capabilities: Private 00:15:35.203 Size (in LBAs): 1548666 (5GiB) 00:15:35.203 Capacity (in LBAs): 1548666 (5GiB) 00:15:35.203 Utilization (in LBAs): 1548666 (5GiB) 00:15:35.203 Thin Provisioning: Not Supported 00:15:35.203 Per-NS Atomic Units: No 00:15:35.203 Maximum Single Source Range Length: 128 00:15:35.203 Maximum Copy Length: 128 00:15:35.203 Maximum Source Range Count: 128 00:15:35.203 NGUID/EUI64 Never Reused: No 00:15:35.203 Namespace Write Protected: No 00:15:35.203 Number of LBA Formats: 8 00:15:35.203 Current LBA Format: LBA Format #07 00:15:35.203 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.203 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.203 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.203 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.203 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.203 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.203 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.203 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.203 00:15:35.203 NVM Specific Namespace Data 00:15:35.203 =========================== 00:15:35.203 Logical Block Storage Tag Mask: 0 00:15:35.203 Protection Information Capabilities: 00:15:35.203 16b Guard Protection Information Storage Tag Support: No 00:15:35.203 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.203 Storage Tag Check Read Support: No 00:15:35.203 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.203 ===================================================== 00:15:35.203 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:35.203 ===================================================== 00:15:35.203 Controller Capabilities/Features 00:15:35.203 ================================ 00:15:35.203 Vendor ID: 1b36 00:15:35.203 Subsystem Vendor ID: 1af4 00:15:35.203 Serial Number: 12341 00:15:35.203 Model Number: QEMU NVMe Ctrl 00:15:35.203 Firmware Version: 8.0.0 00:15:35.203 Recommended Arb Burst: 6 00:15:35.203 IEEE OUI Identifier: 00 54 52 00:15:35.203 Multi-path I/O 00:15:35.203 May have multiple subsystem ports: No 00:15:35.203 May have multiple controllers: No 00:15:35.203 Associated with SR-IOV VF: No 00:15:35.203 Max Data Transfer Size: 524288 00:15:35.203 Max Number of Namespaces: 256 00:15:35.203 Max Number of I/O Queues: 64 00:15:35.203 NVMe Specification Version (VS): 1.4 00:15:35.203 NVMe Specification Version (Identify): 1.4 00:15:35.203 Maximum Queue Entries: 2048 00:15:35.203 Contiguous Queues Required: Yes 00:15:35.203 Arbitration Mechanisms Supported 00:15:35.203 Weighted Round Robin: Not 
Supported 00:15:35.203 Vendor Specific: Not Supported 00:15:35.203 Reset Timeout: 7500 ms 00:15:35.203 Doorbell Stride: 4 bytes 00:15:35.203 NVM Subsystem Reset: Not Supported 00:15:35.203 Command Sets Supported 00:15:35.203 NVM Command Set: Supported 00:15:35.203 Boot Partition: Not Supported 00:15:35.203 Memory Page Size Minimum: 4096 bytes 00:15:35.203 Memory Page Size Maximum: 65536 bytes 00:15:35.203 Persistent Memory Region: Not Supported 00:15:35.203 Optional Asynchronous Events Supported 00:15:35.203 Namespace Attribute Notices: Supported 00:15:35.203 Firmware Activation Notices: Not Supported 00:15:35.203 ANA Change Notices: Not Supported 00:15:35.203 PLE Aggregate Log Change Notices: Not Supported 00:15:35.203 LBA Status Info Alert Notices: Not Supported 00:15:35.203 EGE Aggregate Log Change Notices: Not Supported 00:15:35.203 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.203 Zone Descriptor Change Notices: Not Supported 00:15:35.203 Discovery Log Change Notices: Not Supported 00:15:35.203 Controller Attributes 00:15:35.203 128-bit Host Identifier: Not Supported 00:15:35.203 Non-Operational Permissive Mode: Not Supported 00:15:35.203 NVM Sets: Not Supported 00:15:35.203 Read Recovery Levels: Not Supported 00:15:35.203 Endurance Groups: Not Supported 00:15:35.203 Predictable Latency Mode: Not Supported 00:15:35.203 Traffic Based Keep ALive: Not Supported 00:15:35.203 Namespace Granularity: Not Supported 00:15:35.203 SQ Associations: Not Supported 00:15:35.203 UUID List: Not Supported 00:15:35.203 Multi-Domain Subsystem: Not Supported 00:15:35.203 Fixed Capacity Management: Not Supported 00:15:35.203 Variable Capacity Management: Not Supported 00:15:35.203 Delete Endurance Group: Not Supported 00:15:35.203 Delete NVM Set: Not Supported 00:15:35.203 Extended LBA Formats Supported: Supported 00:15:35.203 Flexible Data Placement Supported: Not Supported 00:15:35.203 00:15:35.203 Controller Memory Buffer Support 00:15:35.203 ================================ 00:15:35.203 Supported: No 00:15:35.203 00:15:35.203 Persistent Memory Region Support 00:15:35.203 ================================ 00:15:35.203 Supported: No 00:15:35.203 00:15:35.203 Admin Command Set Attributes 00:15:35.203 ============================ 00:15:35.203 Security Send/Receive: Not Supported 00:15:35.203 Format NVM: Supported 00:15:35.203 Firmware Activate/Download: Not Supported 00:15:35.203 Namespace Management: Supported 00:15:35.203 Device Self-Test: Not Supported 00:15:35.203 Directives: Supported 00:15:35.203 NVMe-MI: Not Supported 00:15:35.203 Virtualization Management: Not Supported 00:15:35.203 Doorbell Buffer Config: Supported 00:15:35.203 Get LBA Status Capability: Not Supported 00:15:35.203 Command & Feature Lockdown Capability: Not Supported 00:15:35.203 Abort Command Limit: 4 00:15:35.203 Async Event Request Limit: 4 00:15:35.203 Number of Firmware Slots: N/A 00:15:35.203 Firmware Slot 1 Read-Only: N/A 00:15:35.203 Firmware Activation Without Reset: N/A 00:15:35.203 Multiple Update Detection Support: N/A 00:15:35.203 Firmware Update Granularity: No Information Provided 00:15:35.203 Per-Namespace SMART Log: Yes 00:15:35.203 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.203 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:15:35.203 Command Effects Log Page: Supported 00:15:35.203 Get Log Page Extended Data: Supported 00:15:35.203 Telemetry Log Pages: Not Supported 00:15:35.203 Persistent Event Log Pages: Not Supported 00:15:35.203 Supported Log Pages Log Page: May Support 
00:15:35.203 Commands Supported & Effects Log Page: Not Supported 00:15:35.203 Feature Identifiers & Effects Log Page:May Support 00:15:35.203 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.203 Data Area 4 for Telemetry Log: Not Supported 00:15:35.203 Error Log Page Entries Supported: 1 00:15:35.203 Keep Alive: Not Supported 00:15:35.203 00:15:35.203 NVM Command Set Attributes 00:15:35.203 ========================== 00:15:35.203 Submission Queue Entry Size 00:15:35.203 Max: 64 00:15:35.204 Min: 64 00:15:35.204 Completion Queue Entry Size 00:15:35.204 Max: 16 00:15:35.204 Min: 16 00:15:35.204 Number of Namespaces: 256 00:15:35.204 Compare Command: Supported 00:15:35.204 Write Uncorrectable Command: Not Supported 00:15:35.204 Dataset Management Command: Supported 00:15:35.204 Write Zeroes Command: Supported 00:15:35.204 Set Features Save Field: Supported 00:15:35.204 Reservations: Not Supported 00:15:35.204 Timestamp: Supported 00:15:35.204 Copy: Supported 00:15:35.204 Volatile Write Cache: Present 00:15:35.204 Atomic Write Unit (Normal): 1 00:15:35.204 Atomic Write Unit (PFail): 1 00:15:35.204 Atomic Compare & Write Unit: 1 00:15:35.204 Fused Compare & Write: Not Supported 00:15:35.204 Scatter-Gather List 00:15:35.204 SGL Command Set: Supported 00:15:35.204 SGL Keyed: Not Supported 00:15:35.204 SGL Bit Bucket Descriptor: Not Supported 00:15:35.204 SGL Metadata Pointer: Not Supported 00:15:35.204 Oversized SGL: Not Supported 00:15:35.204 SGL Metadata Address: Not Supported 00:15:35.204 SGL Offset: Not Supported 00:15:35.204 Transport SGL Data Block: Not Supported 00:15:35.204 Replay Protected Memory Block: Not Supported 00:15:35.204 00:15:35.204 Firmware Slot Information 00:15:35.204 ========================= 00:15:35.204 Active slot: 1 00:15:35.204 Slot 1 Firmware Revision: 1.0 00:15:35.204 00:15:35.204 00:15:35.204 Commands Supported and Effects 00:15:35.204 ============================== 00:15:35.204 Admin Commands 00:15:35.204 -------------- 00:15:35.204 Delete I/O Submission Queue (00h): Supported 00:15:35.204 Create I/O Submission Queue (01h): Supported 00:15:35.204 Get Log Page (02h): Supported 00:15:35.204 Delete I/O Completion Queue (04h): Supported 00:15:35.204 Create I/O Completion Queue (05h): Supported 00:15:35.204 Identify (06h): Supported 00:15:35.204 Abort (08h): Supported 00:15:35.204 Set Features (09h): Supported 00:15:35.204 Get Features (0Ah): Supported 00:15:35.204 Asynchronous Event Request (0Ch): Supported 00:15:35.204 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.204 Directive Send (19h): Supported 00:15:35.204 Directive Receive (1Ah): Supported 00:15:35.204 Virtualization Management (1Ch): Supported 00:15:35.204 Doorbell Buffer Config (7Ch): Supported 00:15:35.204 Format NVM (80h): Supported LBA-Change 00:15:35.204 I/O Commands 00:15:35.204 ------------ 00:15:35.204 Flush (00h): Supported LBA-Change 00:15:35.204 Write (01h): Supported LBA-Change 00:15:35.204 Read (02h): Supported 00:15:35.204 Compare (05h): Supported 00:15:35.204 Write Zeroes (08h): Supported LBA-Change 00:15:35.204 Dataset Management (09h): Supported LBA-Change 00:15:35.204 Unknown (0Ch): Supported 00:15:35.204 Unknown (12h): Supported 00:15:35.204 Copy (19h): Supported LBA-Change 00:15:35.204 Unknown (1Dh): Supported LBA-Change 00:15:35.204 00:15:35.204 Error Log 00:15:35.204 ========= 00:15:35.204 00:15:35.204 Arbitration 00:15:35.204 =========== 00:15:35.204 Arbitration Burst: no limit 00:15:35.204 00:15:35.204 Power Management 00:15:35.204 ================ 
00:15:35.204 Number of Power States: 1 00:15:35.204 Current Power State: Power State #0 00:15:35.204 Power State #0: 00:15:35.204 Max Power: 25.00 W 00:15:35.204 Non-Operational State: Operational 00:15:35.204 Entry Latency: 16 microseconds 00:15:35.204 Exit Latency: 4 microseconds 00:15:35.204 Relative Read Throughput: 0 00:15:35.204 Relative Read Latency: 0 00:15:35.204 Relative Write Throughput: 0 00:15:35.204 Relative Write Latency: 0 00:15:35.204 Idle Power: Not Reported 00:15:35.204 Active Power: Not Reported 00:15:35.204 Non-Operational Permissive Mode: Not Supported 00:15:35.204 00:15:35.204 Health Information 00:15:35.204 ================== 00:15:35.204 Critical Warnings: 00:15:35.204 Available Spare Space: OK 00:15:35.204 Temperature: OK 00:15:35.204 [2024-07-24 17:16:21.242401] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68179 terminated unexpected 00:15:35.204 [2024-07-24 17:16:21.243782] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68179 terminated unexpected 00:15:35.204 [2024-07-24 17:16:21.244578] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68179 terminated unexpected 00:15:35.204 Device Reliability: OK 00:15:35.204 Read Only: No 00:15:35.204 Volatile Memory Backup: OK 00:15:35.204 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.204 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.204 Available Spare: 0% 00:15:35.204 Available Spare Threshold: 0% 00:15:35.204 Life Percentage Used: 0% 00:15:35.204 Data Units Read: 1160 00:15:35.204 Data Units Written: 944 00:15:35.204 Host Read Commands: 51994 00:15:35.204 Host Write Commands: 49083 00:15:35.204 Controller Busy Time: 0 minutes 00:15:35.204 Power Cycles: 0 00:15:35.204 Power On Hours: 0 hours 00:15:35.204 Unsafe Shutdowns: 0 00:15:35.204 Unrecoverable Media Errors: 0 00:15:35.204 Lifetime Error Log Entries: 0 00:15:35.204 Warning Temperature Time: 0 minutes 00:15:35.204 Critical Temperature Time: 0 minutes 00:15:35.204 00:15:35.204 Number of Queues 00:15:35.204 ================ 00:15:35.204 Number of I/O Submission Queues: 64 00:15:35.204 Number of I/O Completion Queues: 64 00:15:35.204 00:15:35.204 ZNS Specific Controller Data 00:15:35.204 ============================ 00:15:35.204 Zone Append Size Limit: 0 00:15:35.204 00:15:35.204 00:15:35.204 Active Namespaces 00:15:35.204 ================= 00:15:35.204 Namespace ID:1 00:15:35.204 Error Recovery Timeout: Unlimited 00:15:35.204 Command Set Identifier: NVM (00h) 00:15:35.204 Deallocate: Supported 00:15:35.204 Deallocated/Unwritten Error: Supported 00:15:35.204 Deallocated Read Value: All 0x00 00:15:35.204 Deallocate in Write Zeroes: Not Supported 00:15:35.204 Deallocated Guard Field: 0xFFFF 00:15:35.204 Flush: Supported 00:15:35.204 Reservation: Not Supported 00:15:35.204 Namespace Sharing Capabilities: Private 00:15:35.204 Size (in LBAs): 1310720 (5GiB) 00:15:35.204 Capacity (in LBAs): 1310720 (5GiB) 00:15:35.204 Utilization (in LBAs): 1310720 (5GiB) 00:15:35.204 Thin Provisioning: Not Supported 00:15:35.204 Per-NS Atomic Units: No 00:15:35.204 Maximum Single Source Range Length: 128 00:15:35.204 Maximum Copy Length: 128 00:15:35.204 Maximum Source Range Count: 128 00:15:35.204 NGUID/EUI64 Never Reused: No 00:15:35.204 Namespace Write Protected: No 00:15:35.204 Number of LBA Formats: 8 00:15:35.204 Current LBA Format: LBA Format #04 00:15:35.204 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.204 LBA Format #01: Data Size: 512 Metadata
Size: 8 00:15:35.204 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.204 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.204 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.204 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.204 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.204 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.204 00:15:35.204 NVM Specific Namespace Data 00:15:35.204 =========================== 00:15:35.204 Logical Block Storage Tag Mask: 0 00:15:35.204 Protection Information Capabilities: 00:15:35.204 16b Guard Protection Information Storage Tag Support: No 00:15:35.204 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.204 Storage Tag Check Read Support: No 00:15:35.204 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.204 ===================================================== 00:15:35.204 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:35.204 ===================================================== 00:15:35.204 Controller Capabilities/Features 00:15:35.204 ================================ 00:15:35.204 Vendor ID: 1b36 00:15:35.204 Subsystem Vendor ID: 1af4 00:15:35.204 Serial Number: 12343 00:15:35.205 Model Number: QEMU NVMe Ctrl 00:15:35.205 Firmware Version: 8.0.0 00:15:35.205 Recommended Arb Burst: 6 00:15:35.205 IEEE OUI Identifier: 00 54 52 00:15:35.205 Multi-path I/O 00:15:35.205 May have multiple subsystem ports: No 00:15:35.205 May have multiple controllers: Yes 00:15:35.205 Associated with SR-IOV VF: No 00:15:35.205 Max Data Transfer Size: 524288 00:15:35.205 Max Number of Namespaces: 256 00:15:35.205 Max Number of I/O Queues: 64 00:15:35.205 NVMe Specification Version (VS): 1.4 00:15:35.205 NVMe Specification Version (Identify): 1.4 00:15:35.205 Maximum Queue Entries: 2048 00:15:35.205 Contiguous Queues Required: Yes 00:15:35.205 Arbitration Mechanisms Supported 00:15:35.205 Weighted Round Robin: Not Supported 00:15:35.205 Vendor Specific: Not Supported 00:15:35.205 Reset Timeout: 7500 ms 00:15:35.205 Doorbell Stride: 4 bytes 00:15:35.205 NVM Subsystem Reset: Not Supported 00:15:35.205 Command Sets Supported 00:15:35.205 NVM Command Set: Supported 00:15:35.205 Boot Partition: Not Supported 00:15:35.205 Memory Page Size Minimum: 4096 bytes 00:15:35.205 Memory Page Size Maximum: 65536 bytes 00:15:35.205 Persistent Memory Region: Not Supported 00:15:35.205 Optional Asynchronous Events Supported 00:15:35.205 Namespace Attribute Notices: Supported 00:15:35.205 Firmware Activation Notices: Not Supported 00:15:35.205 ANA Change Notices: Not Supported 00:15:35.205 PLE Aggregate Log Change Notices: Not Supported 00:15:35.205 LBA Status Info Alert Notices: Not Supported 00:15:35.205 EGE Aggregate Log Change 
Notices: Not Supported 00:15:35.205 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.205 Zone Descriptor Change Notices: Not Supported 00:15:35.205 Discovery Log Change Notices: Not Supported 00:15:35.205 Controller Attributes 00:15:35.205 128-bit Host Identifier: Not Supported 00:15:35.205 Non-Operational Permissive Mode: Not Supported 00:15:35.205 NVM Sets: Not Supported 00:15:35.205 Read Recovery Levels: Not Supported 00:15:35.205 Endurance Groups: Supported 00:15:35.205 Predictable Latency Mode: Not Supported 00:15:35.205 Traffic Based Keep ALive: Not Supported 00:15:35.205 Namespace Granularity: Not Supported 00:15:35.205 SQ Associations: Not Supported 00:15:35.205 UUID List: Not Supported 00:15:35.205 Multi-Domain Subsystem: Not Supported 00:15:35.205 Fixed Capacity Management: Not Supported 00:15:35.205 Variable Capacity Management: Not Supported 00:15:35.205 Delete Endurance Group: Not Supported 00:15:35.205 Delete NVM Set: Not Supported 00:15:35.205 Extended LBA Formats Supported: Supported 00:15:35.205 Flexible Data Placement Supported: Supported 00:15:35.205 00:15:35.205 Controller Memory Buffer Support 00:15:35.205 ================================ 00:15:35.205 Supported: No 00:15:35.205 00:15:35.205 Persistent Memory Region Support 00:15:35.205 ================================ 00:15:35.205 Supported: No 00:15:35.205 00:15:35.205 Admin Command Set Attributes 00:15:35.205 ============================ 00:15:35.205 Security Send/Receive: Not Supported 00:15:35.205 Format NVM: Supported 00:15:35.205 Firmware Activate/Download: Not Supported 00:15:35.205 Namespace Management: Supported 00:15:35.205 Device Self-Test: Not Supported 00:15:35.205 Directives: Supported 00:15:35.205 NVMe-MI: Not Supported 00:15:35.205 Virtualization Management: Not Supported 00:15:35.205 Doorbell Buffer Config: Supported 00:15:35.205 Get LBA Status Capability: Not Supported 00:15:35.205 Command & Feature Lockdown Capability: Not Supported 00:15:35.205 Abort Command Limit: 4 00:15:35.205 Async Event Request Limit: 4 00:15:35.205 Number of Firmware Slots: N/A 00:15:35.205 Firmware Slot 1 Read-Only: N/A 00:15:35.205 Firmware Activation Without Reset: N/A 00:15:35.205 Multiple Update Detection Support: N/A 00:15:35.205 Firmware Update Granularity: No Information Provided 00:15:35.205 Per-Namespace SMART Log: Yes 00:15:35.205 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.205 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:15:35.205 Command Effects Log Page: Supported 00:15:35.205 Get Log Page Extended Data: Supported 00:15:35.205 Telemetry Log Pages: Not Supported 00:15:35.205 Persistent Event Log Pages: Not Supported 00:15:35.205 Supported Log Pages Log Page: May Support 00:15:35.205 Commands Supported & Effects Log Page: Not Supported 00:15:35.205 Feature Identifiers & Effects Log Page:May Support 00:15:35.205 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.205 Data Area 4 for Telemetry Log: Not Supported 00:15:35.205 Error Log Page Entries Supported: 1 00:15:35.205 Keep Alive: Not Supported 00:15:35.205 00:15:35.205 NVM Command Set Attributes 00:15:35.205 ========================== 00:15:35.205 Submission Queue Entry Size 00:15:35.205 Max: 64 00:15:35.205 Min: 64 00:15:35.205 Completion Queue Entry Size 00:15:35.205 Max: 16 00:15:35.205 Min: 16 00:15:35.205 Number of Namespaces: 256 00:15:35.205 Compare Command: Supported 00:15:35.205 Write Uncorrectable Command: Not Supported 00:15:35.205 Dataset Management Command: Supported 00:15:35.205 Write Zeroes Command: 
Supported 00:15:35.205 Set Features Save Field: Supported 00:15:35.205 Reservations: Not Supported 00:15:35.205 Timestamp: Supported 00:15:35.205 Copy: Supported 00:15:35.205 Volatile Write Cache: Present 00:15:35.205 Atomic Write Unit (Normal): 1 00:15:35.205 Atomic Write Unit (PFail): 1 00:15:35.205 Atomic Compare & Write Unit: 1 00:15:35.205 Fused Compare & Write: Not Supported 00:15:35.205 Scatter-Gather List 00:15:35.205 SGL Command Set: Supported 00:15:35.205 SGL Keyed: Not Supported 00:15:35.205 SGL Bit Bucket Descriptor: Not Supported 00:15:35.205 SGL Metadata Pointer: Not Supported 00:15:35.205 Oversized SGL: Not Supported 00:15:35.205 SGL Metadata Address: Not Supported 00:15:35.205 SGL Offset: Not Supported 00:15:35.205 Transport SGL Data Block: Not Supported 00:15:35.205 Replay Protected Memory Block: Not Supported 00:15:35.205 00:15:35.205 Firmware Slot Information 00:15:35.205 ========================= 00:15:35.205 Active slot: 1 00:15:35.205 Slot 1 Firmware Revision: 1.0 00:15:35.205 00:15:35.205 00:15:35.205 Commands Supported and Effects 00:15:35.205 ============================== 00:15:35.205 Admin Commands 00:15:35.205 -------------- 00:15:35.205 Delete I/O Submission Queue (00h): Supported 00:15:35.205 Create I/O Submission Queue (01h): Supported 00:15:35.205 Get Log Page (02h): Supported 00:15:35.205 Delete I/O Completion Queue (04h): Supported 00:15:35.205 Create I/O Completion Queue (05h): Supported 00:15:35.205 Identify (06h): Supported 00:15:35.205 Abort (08h): Supported 00:15:35.205 Set Features (09h): Supported 00:15:35.205 Get Features (0Ah): Supported 00:15:35.205 Asynchronous Event Request (0Ch): Supported 00:15:35.205 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.205 Directive Send (19h): Supported 00:15:35.205 Directive Receive (1Ah): Supported 00:15:35.205 Virtualization Management (1Ch): Supported 00:15:35.205 Doorbell Buffer Config (7Ch): Supported 00:15:35.205 Format NVM (80h): Supported LBA-Change 00:15:35.205 I/O Commands 00:15:35.205 ------------ 00:15:35.205 Flush (00h): Supported LBA-Change 00:15:35.205 Write (01h): Supported LBA-Change 00:15:35.205 Read (02h): Supported 00:15:35.205 Compare (05h): Supported 00:15:35.205 Write Zeroes (08h): Supported LBA-Change 00:15:35.205 Dataset Management (09h): Supported LBA-Change 00:15:35.205 Unknown (0Ch): Supported 00:15:35.205 Unknown (12h): Supported 00:15:35.205 Copy (19h): Supported LBA-Change 00:15:35.205 Unknown (1Dh): Supported LBA-Change 00:15:35.205 00:15:35.205 Error Log 00:15:35.205 ========= 00:15:35.205 00:15:35.205 Arbitration 00:15:35.205 =========== 00:15:35.205 Arbitration Burst: no limit 00:15:35.205 00:15:35.205 Power Management 00:15:35.205 ================ 00:15:35.205 Number of Power States: 1 00:15:35.205 Current Power State: Power State #0 00:15:35.205 Power State #0: 00:15:35.205 Max Power: 25.00 W 00:15:35.205 Non-Operational State: Operational 00:15:35.205 Entry Latency: 16 microseconds 00:15:35.205 Exit Latency: 4 microseconds 00:15:35.205 Relative Read Throughput: 0 00:15:35.205 Relative Read Latency: 0 00:15:35.206 Relative Write Throughput: 0 00:15:35.206 Relative Write Latency: 0 00:15:35.206 Idle Power: Not Reported 00:15:35.206 Active Power: Not Reported 00:15:35.206 Non-Operational Permissive Mode: Not Supported 00:15:35.206 00:15:35.206 Health Information 00:15:35.206 ================== 00:15:35.206 Critical Warnings: 00:15:35.206 Available Spare Space: OK 00:15:35.206 Temperature: OK 00:15:35.206 Device Reliability: OK 00:15:35.206 Read Only: No 
00:15:35.206 Volatile Memory Backup: OK 00:15:35.206 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.206 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.206 Available Spare: 0% 00:15:35.206 Available Spare Threshold: 0% 00:15:35.206 Life Percentage Used: 0% 00:15:35.206 Data Units Read: 814 00:15:35.206 Data Units Written: 708 00:15:35.206 Host Read Commands: 35492 00:15:35.206 Host Write Commands: 34082 00:15:35.206 Controller Busy Time: 0 minutes 00:15:35.206 Power Cycles: 0 00:15:35.206 Power On Hours: 0 hours 00:15:35.206 Unsafe Shutdowns: 0 00:15:35.206 Unrecoverable Media Errors: 0 00:15:35.206 Lifetime Error Log Entries: 0 00:15:35.206 Warning Temperature Time: 0 minutes 00:15:35.206 Critical Temperature Time: 0 minutes 00:15:35.206 00:15:35.206 Number of Queues 00:15:35.206 ================ 00:15:35.206 Number of I/O Submission Queues: 64 00:15:35.206 Number of I/O Completion Queues: 64 00:15:35.206 00:15:35.206 ZNS Specific Controller Data 00:15:35.206 ============================ 00:15:35.206 Zone Append Size Limit: 0 00:15:35.206 00:15:35.206 00:15:35.206 Active Namespaces 00:15:35.206 ================= 00:15:35.206 Namespace ID:1 00:15:35.206 Error Recovery Timeout: Unlimited 00:15:35.206 Command Set Identifier: NVM (00h) 00:15:35.206 Deallocate: Supported 00:15:35.206 Deallocated/Unwritten Error: Supported 00:15:35.206 Deallocated Read Value: All 0x00 00:15:35.206 Deallocate in Write Zeroes: Not Supported 00:15:35.206 Deallocated Guard Field: 0xFFFF 00:15:35.206 Flush: Supported 00:15:35.206 Reservation: Not Supported 00:15:35.206 Namespace Sharing Capabilities: Multiple Controllers 00:15:35.206 Size (in LBAs): 262144 (1GiB) 00:15:35.206 Capacity (in LBAs): 262144 (1GiB) 00:15:35.206 Utilization (in LBAs): 262144 (1GiB) 00:15:35.206 Thin Provisioning: Not Supported 00:15:35.206 Per-NS Atomic Units: No 00:15:35.206 Maximum Single Source Range Length: 128 00:15:35.206 Maximum Copy Length: 128 00:15:35.206 Maximum Source Range Count: 128 00:15:35.206 NGUID/EUI64 Never Reused: No 00:15:35.206 Namespace Write Protected: No 00:15:35.206 Endurance group ID: 1 00:15:35.206 Number of LBA Formats: 8 00:15:35.206 Current LBA Format: LBA Format #04 00:15:35.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.206 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.206 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.206 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.206 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.206 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.206 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.206 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.206 00:15:35.206 Get Feature FDP: 00:15:35.206 ================ 00:15:35.206 Enabled: Yes 00:15:35.206 FDP configuration index: 0 00:15:35.206 00:15:35.206 FDP configurations log page 00:15:35.206 =========================== 00:15:35.206 Number of FDP configurations: 1 00:15:35.206 Version: 0 00:15:35.206 Size: 112 00:15:35.206 FDP Configuration Descriptor: 0 00:15:35.206 Descriptor Size: 96 00:15:35.206 Reclaim Group Identifier format: 2 00:15:35.206 FDP Volatile Write Cache: Not Present 00:15:35.206 FDP Configuration: Valid 00:15:35.206 Vendor Specific Size: 0 00:15:35.206 Number of Reclaim Groups: 2 00:15:35.206 Number of Recalim Unit Handles: 8 00:15:35.206 Max Placement Identifiers: 128 00:15:35.206 Number of Namespaces Suppprted: 256 00:15:35.206 Reclaim unit Nominal Size: 6000000 bytes 00:15:35.206 Estimated Reclaim Unit Time 
Limit: Not Reported 00:15:35.206 RUH Desc #000: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #001: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #002: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #003: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #004: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #005: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #006: RUH Type: Initially Isolated 00:15:35.206 RUH Desc #007: RUH Type: Initially Isolated 00:15:35.206 00:15:35.206 FDP reclaim unit handle usage log page 00:15:35.206 ====================================== 00:15:35.206 Number of Reclaim Unit Handles: 8 00:15:35.206 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:35.206 RUH Usage Desc #001: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #002: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #003: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #004: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #005: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #006: RUH Attributes: Unused 00:15:35.206 RUH Usage Desc #007: RUH Attributes: Unused 00:15:35.206 00:15:35.206 FDP statistics log page 00:15:35.206 ======================= 00:15:35.206 Host bytes with metadata written: 447717376 00:15:35.206 Media bytes with metadata written: 447782912 00:15:35.206 [2024-07-24 17:16:21.247732] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68179 terminated unexpected 00:15:35.206 Media bytes erased: 0 00:15:35.206 00:15:35.206 FDP events log page 00:15:35.206 =================== 00:15:35.206 Number of FDP events: 0 00:15:35.206 00:15:35.206 NVM Specific Namespace Data 00:15:35.206 =========================== 00:15:35.206 Logical Block Storage Tag Mask: 0 00:15:35.206 Protection Information Capabilities: 00:15:35.206 16b Guard Protection Information Storage Tag Support: No 00:15:35.206 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.206 Storage Tag Check Read Support: No 00:15:35.206 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.206 ===================================================== 00:15:35.206 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:35.206 ===================================================== 00:15:35.206 Controller Capabilities/Features 00:15:35.206 ================================ 00:15:35.206 Vendor ID: 1b36 00:15:35.206 Subsystem Vendor ID: 1af4 00:15:35.206 Serial Number: 12342 00:15:35.206 Model Number: QEMU NVMe Ctrl 00:15:35.206 Firmware Version: 8.0.0 00:15:35.206 Recommended Arb Burst: 6 00:15:35.206 IEEE OUI Identifier: 00 54 52 00:15:35.206 Multi-path I/O 00:15:35.206 May have multiple subsystem ports: No 00:15:35.206 May have multiple controllers: No 00:15:35.206 Associated with SR-IOV
VF: No 00:15:35.206 Max Data Transfer Size: 524288 00:15:35.206 Max Number of Namespaces: 256 00:15:35.206 Max Number of I/O Queues: 64 00:15:35.206 NVMe Specification Version (VS): 1.4 00:15:35.206 NVMe Specification Version (Identify): 1.4 00:15:35.206 Maximum Queue Entries: 2048 00:15:35.206 Contiguous Queues Required: Yes 00:15:35.206 Arbitration Mechanisms Supported 00:15:35.206 Weighted Round Robin: Not Supported 00:15:35.206 Vendor Specific: Not Supported 00:15:35.206 Reset Timeout: 7500 ms 00:15:35.206 Doorbell Stride: 4 bytes 00:15:35.207 NVM Subsystem Reset: Not Supported 00:15:35.207 Command Sets Supported 00:15:35.207 NVM Command Set: Supported 00:15:35.207 Boot Partition: Not Supported 00:15:35.207 Memory Page Size Minimum: 4096 bytes 00:15:35.207 Memory Page Size Maximum: 65536 bytes 00:15:35.207 Persistent Memory Region: Not Supported 00:15:35.207 Optional Asynchronous Events Supported 00:15:35.207 Namespace Attribute Notices: Supported 00:15:35.207 Firmware Activation Notices: Not Supported 00:15:35.207 ANA Change Notices: Not Supported 00:15:35.207 PLE Aggregate Log Change Notices: Not Supported 00:15:35.207 LBA Status Info Alert Notices: Not Supported 00:15:35.207 EGE Aggregate Log Change Notices: Not Supported 00:15:35.207 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.207 Zone Descriptor Change Notices: Not Supported 00:15:35.207 Discovery Log Change Notices: Not Supported 00:15:35.207 Controller Attributes 00:15:35.207 128-bit Host Identifier: Not Supported 00:15:35.207 Non-Operational Permissive Mode: Not Supported 00:15:35.207 NVM Sets: Not Supported 00:15:35.207 Read Recovery Levels: Not Supported 00:15:35.207 Endurance Groups: Not Supported 00:15:35.207 Predictable Latency Mode: Not Supported 00:15:35.207 Traffic Based Keep ALive: Not Supported 00:15:35.207 Namespace Granularity: Not Supported 00:15:35.207 SQ Associations: Not Supported 00:15:35.207 UUID List: Not Supported 00:15:35.207 Multi-Domain Subsystem: Not Supported 00:15:35.207 Fixed Capacity Management: Not Supported 00:15:35.207 Variable Capacity Management: Not Supported 00:15:35.207 Delete Endurance Group: Not Supported 00:15:35.207 Delete NVM Set: Not Supported 00:15:35.207 Extended LBA Formats Supported: Supported 00:15:35.207 Flexible Data Placement Supported: Not Supported 00:15:35.207 00:15:35.207 Controller Memory Buffer Support 00:15:35.207 ================================ 00:15:35.207 Supported: No 00:15:35.207 00:15:35.207 Persistent Memory Region Support 00:15:35.207 ================================ 00:15:35.207 Supported: No 00:15:35.207 00:15:35.207 Admin Command Set Attributes 00:15:35.207 ============================ 00:15:35.207 Security Send/Receive: Not Supported 00:15:35.207 Format NVM: Supported 00:15:35.207 Firmware Activate/Download: Not Supported 00:15:35.207 Namespace Management: Supported 00:15:35.207 Device Self-Test: Not Supported 00:15:35.207 Directives: Supported 00:15:35.207 NVMe-MI: Not Supported 00:15:35.207 Virtualization Management: Not Supported 00:15:35.207 Doorbell Buffer Config: Supported 00:15:35.207 Get LBA Status Capability: Not Supported 00:15:35.207 Command & Feature Lockdown Capability: Not Supported 00:15:35.207 Abort Command Limit: 4 00:15:35.207 Async Event Request Limit: 4 00:15:35.207 Number of Firmware Slots: N/A 00:15:35.207 Firmware Slot 1 Read-Only: N/A 00:15:35.207 Firmware Activation Without Reset: N/A 00:15:35.207 Multiple Update Detection Support: N/A 00:15:35.207 Firmware Update Granularity: No Information Provided 00:15:35.207 
Per-Namespace SMART Log: Yes 00:15:35.207 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.207 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:15:35.207 Command Effects Log Page: Supported 00:15:35.207 Get Log Page Extended Data: Supported 00:15:35.207 Telemetry Log Pages: Not Supported 00:15:35.207 Persistent Event Log Pages: Not Supported 00:15:35.207 Supported Log Pages Log Page: May Support 00:15:35.207 Commands Supported & Effects Log Page: Not Supported 00:15:35.207 Feature Identifiers & Effects Log Page:May Support 00:15:35.207 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.207 Data Area 4 for Telemetry Log: Not Supported 00:15:35.207 Error Log Page Entries Supported: 1 00:15:35.207 Keep Alive: Not Supported 00:15:35.207 00:15:35.207 NVM Command Set Attributes 00:15:35.207 ========================== 00:15:35.207 Submission Queue Entry Size 00:15:35.207 Max: 64 00:15:35.207 Min: 64 00:15:35.207 Completion Queue Entry Size 00:15:35.207 Max: 16 00:15:35.207 Min: 16 00:15:35.207 Number of Namespaces: 256 00:15:35.207 Compare Command: Supported 00:15:35.207 Write Uncorrectable Command: Not Supported 00:15:35.207 Dataset Management Command: Supported 00:15:35.207 Write Zeroes Command: Supported 00:15:35.207 Set Features Save Field: Supported 00:15:35.207 Reservations: Not Supported 00:15:35.207 Timestamp: Supported 00:15:35.207 Copy: Supported 00:15:35.207 Volatile Write Cache: Present 00:15:35.207 Atomic Write Unit (Normal): 1 00:15:35.207 Atomic Write Unit (PFail): 1 00:15:35.207 Atomic Compare & Write Unit: 1 00:15:35.207 Fused Compare & Write: Not Supported 00:15:35.207 Scatter-Gather List 00:15:35.207 SGL Command Set: Supported 00:15:35.207 SGL Keyed: Not Supported 00:15:35.207 SGL Bit Bucket Descriptor: Not Supported 00:15:35.207 SGL Metadata Pointer: Not Supported 00:15:35.207 Oversized SGL: Not Supported 00:15:35.207 SGL Metadata Address: Not Supported 00:15:35.207 SGL Offset: Not Supported 00:15:35.207 Transport SGL Data Block: Not Supported 00:15:35.207 Replay Protected Memory Block: Not Supported 00:15:35.207 00:15:35.207 Firmware Slot Information 00:15:35.207 ========================= 00:15:35.207 Active slot: 1 00:15:35.207 Slot 1 Firmware Revision: 1.0 00:15:35.207 00:15:35.207 00:15:35.207 Commands Supported and Effects 00:15:35.207 ============================== 00:15:35.207 Admin Commands 00:15:35.207 -------------- 00:15:35.207 Delete I/O Submission Queue (00h): Supported 00:15:35.207 Create I/O Submission Queue (01h): Supported 00:15:35.207 Get Log Page (02h): Supported 00:15:35.207 Delete I/O Completion Queue (04h): Supported 00:15:35.207 Create I/O Completion Queue (05h): Supported 00:15:35.207 Identify (06h): Supported 00:15:35.207 Abort (08h): Supported 00:15:35.207 Set Features (09h): Supported 00:15:35.207 Get Features (0Ah): Supported 00:15:35.207 Asynchronous Event Request (0Ch): Supported 00:15:35.207 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.207 Directive Send (19h): Supported 00:15:35.207 Directive Receive (1Ah): Supported 00:15:35.207 Virtualization Management (1Ch): Supported 00:15:35.207 Doorbell Buffer Config (7Ch): Supported 00:15:35.207 Format NVM (80h): Supported LBA-Change 00:15:35.207 I/O Commands 00:15:35.207 ------------ 00:15:35.207 Flush (00h): Supported LBA-Change 00:15:35.207 Write (01h): Supported LBA-Change 00:15:35.207 Read (02h): Supported 00:15:35.207 Compare (05h): Supported 00:15:35.207 Write Zeroes (08h): Supported LBA-Change 00:15:35.207 Dataset Management (09h): Supported LBA-Change 
00:15:35.207 Unknown (0Ch): Supported 00:15:35.207 Unknown (12h): Supported 00:15:35.207 Copy (19h): Supported LBA-Change 00:15:35.207 Unknown (1Dh): Supported LBA-Change 00:15:35.207 00:15:35.207 Error Log 00:15:35.207 ========= 00:15:35.207 00:15:35.207 Arbitration 00:15:35.207 =========== 00:15:35.207 Arbitration Burst: no limit 00:15:35.207 00:15:35.207 Power Management 00:15:35.207 ================ 00:15:35.207 Number of Power States: 1 00:15:35.207 Current Power State: Power State #0 00:15:35.207 Power State #0: 00:15:35.207 Max Power: 25.00 W 00:15:35.207 Non-Operational State: Operational 00:15:35.207 Entry Latency: 16 microseconds 00:15:35.207 Exit Latency: 4 microseconds 00:15:35.207 Relative Read Throughput: 0 00:15:35.207 Relative Read Latency: 0 00:15:35.207 Relative Write Throughput: 0 00:15:35.207 Relative Write Latency: 0 00:15:35.207 Idle Power: Not Reported 00:15:35.207 Active Power: Not Reported 00:15:35.207 Non-Operational Permissive Mode: Not Supported 00:15:35.207 00:15:35.207 Health Information 00:15:35.207 ================== 00:15:35.207 Critical Warnings: 00:15:35.207 Available Spare Space: OK 00:15:35.207 Temperature: OK 00:15:35.207 Device Reliability: OK 00:15:35.207 Read Only: No 00:15:35.207 Volatile Memory Backup: OK 00:15:35.207 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.207 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.207 Available Spare: 0% 00:15:35.208 Available Spare Threshold: 0% 00:15:35.208 Life Percentage Used: 0% 00:15:35.208 Data Units Read: 2281 00:15:35.208 Data Units Written: 1961 00:15:35.208 Host Read Commands: 104622 00:15:35.208 Host Write Commands: 100392 00:15:35.208 Controller Busy Time: 0 minutes 00:15:35.208 Power Cycles: 0 00:15:35.208 Power On Hours: 0 hours 00:15:35.208 Unsafe Shutdowns: 0 00:15:35.208 Unrecoverable Media Errors: 0 00:15:35.208 Lifetime Error Log Entries: 0 00:15:35.208 Warning Temperature Time: 0 minutes 00:15:35.208 Critical Temperature Time: 0 minutes 00:15:35.208 00:15:35.208 Number of Queues 00:15:35.208 ================ 00:15:35.208 Number of I/O Submission Queues: 64 00:15:35.208 Number of I/O Completion Queues: 64 00:15:35.208 00:15:35.208 ZNS Specific Controller Data 00:15:35.208 ============================ 00:15:35.208 Zone Append Size Limit: 0 00:15:35.208 00:15:35.208 00:15:35.208 Active Namespaces 00:15:35.208 ================= 00:15:35.208 Namespace ID:1 00:15:35.208 Error Recovery Timeout: Unlimited 00:15:35.208 Command Set Identifier: NVM (00h) 00:15:35.208 Deallocate: Supported 00:15:35.208 Deallocated/Unwritten Error: Supported 00:15:35.208 Deallocated Read Value: All 0x00 00:15:35.208 Deallocate in Write Zeroes: Not Supported 00:15:35.208 Deallocated Guard Field: 0xFFFF 00:15:35.208 Flush: Supported 00:15:35.208 Reservation: Not Supported 00:15:35.208 Namespace Sharing Capabilities: Private 00:15:35.208 Size (in LBAs): 1048576 (4GiB) 00:15:35.208 Capacity (in LBAs): 1048576 (4GiB) 00:15:35.208 Utilization (in LBAs): 1048576 (4GiB) 00:15:35.208 Thin Provisioning: Not Supported 00:15:35.208 Per-NS Atomic Units: No 00:15:35.208 Maximum Single Source Range Length: 128 00:15:35.208 Maximum Copy Length: 128 00:15:35.208 Maximum Source Range Count: 128 00:15:35.208 NGUID/EUI64 Never Reused: No 00:15:35.208 Namespace Write Protected: No 00:15:35.208 Number of LBA Formats: 8 00:15:35.208 Current LBA Format: LBA Format #04 00:15:35.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.208 LBA Format #02: Data Size: 512 
Metadata Size: 16 00:15:35.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.208 00:15:35.208 NVM Specific Namespace Data 00:15:35.208 =========================== 00:15:35.208 Logical Block Storage Tag Mask: 0 00:15:35.208 Protection Information Capabilities: 00:15:35.208 16b Guard Protection Information Storage Tag Support: No 00:15:35.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.208 Storage Tag Check Read Support: No 00:15:35.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Namespace ID:2 00:15:35.208 Error Recovery Timeout: Unlimited 00:15:35.208 Command Set Identifier: NVM (00h) 00:15:35.208 Deallocate: Supported 00:15:35.208 Deallocated/Unwritten Error: Supported 00:15:35.208 Deallocated Read Value: All 0x00 00:15:35.208 Deallocate in Write Zeroes: Not Supported 00:15:35.208 Deallocated Guard Field: 0xFFFF 00:15:35.208 Flush: Supported 00:15:35.208 Reservation: Not Supported 00:15:35.208 Namespace Sharing Capabilities: Private 00:15:35.208 Size (in LBAs): 1048576 (4GiB) 00:15:35.208 Capacity (in LBAs): 1048576 (4GiB) 00:15:35.208 Utilization (in LBAs): 1048576 (4GiB) 00:15:35.208 Thin Provisioning: Not Supported 00:15:35.208 Per-NS Atomic Units: No 00:15:35.208 Maximum Single Source Range Length: 128 00:15:35.208 Maximum Copy Length: 128 00:15:35.208 Maximum Source Range Count: 128 00:15:35.208 NGUID/EUI64 Never Reused: No 00:15:35.208 Namespace Write Protected: No 00:15:35.208 Number of LBA Formats: 8 00:15:35.208 Current LBA Format: LBA Format #04 00:15:35.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.208 00:15:35.208 NVM Specific Namespace Data 00:15:35.208 =========================== 00:15:35.208 Logical Block Storage Tag Mask: 0 00:15:35.208 Protection Information Capabilities: 00:15:35.208 16b Guard Protection Information Storage Tag Support: No 00:15:35.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.208 Storage Tag Check Read Support: No 00:15:35.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:15:35.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Namespace ID:3 00:15:35.208 Error Recovery Timeout: Unlimited 00:15:35.208 Command Set Identifier: NVM (00h) 00:15:35.208 Deallocate: Supported 00:15:35.208 Deallocated/Unwritten Error: Supported 00:15:35.208 Deallocated Read Value: All 0x00 00:15:35.208 Deallocate in Write Zeroes: Not Supported 00:15:35.208 Deallocated Guard Field: 0xFFFF 00:15:35.208 Flush: Supported 00:15:35.208 Reservation: Not Supported 00:15:35.208 Namespace Sharing Capabilities: Private 00:15:35.208 Size (in LBAs): 1048576 (4GiB) 00:15:35.208 Capacity (in LBAs): 1048576 (4GiB) 00:15:35.208 Utilization (in LBAs): 1048576 (4GiB) 00:15:35.208 Thin Provisioning: Not Supported 00:15:35.208 Per-NS Atomic Units: No 00:15:35.208 Maximum Single Source Range Length: 128 00:15:35.208 Maximum Copy Length: 128 00:15:35.208 Maximum Source Range Count: 128 00:15:35.208 NGUID/EUI64 Never Reused: No 00:15:35.208 Namespace Write Protected: No 00:15:35.208 Number of LBA Formats: 8 00:15:35.208 Current LBA Format: LBA Format #04 00:15:35.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.208 00:15:35.208 NVM Specific Namespace Data 00:15:35.208 =========================== 00:15:35.208 Logical Block Storage Tag Mask: 0 00:15:35.208 Protection Information Capabilities: 00:15:35.208 16b Guard Protection Information Storage Tag Support: No 00:15:35.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.208 Storage Tag Check Read Support: No 00:15:35.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.209 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.209 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # 
for bdf in "${bdfs[@]}" 00:15:35.209 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:15:35.467 ===================================================== 00:15:35.467 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:35.467 ===================================================== 00:15:35.467 Controller Capabilities/Features 00:15:35.467 ================================ 00:15:35.467 Vendor ID: 1b36 00:15:35.467 Subsystem Vendor ID: 1af4 00:15:35.467 Serial Number: 12340 00:15:35.467 Model Number: QEMU NVMe Ctrl 00:15:35.467 Firmware Version: 8.0.0 00:15:35.467 Recommended Arb Burst: 6 00:15:35.467 IEEE OUI Identifier: 00 54 52 00:15:35.467 Multi-path I/O 00:15:35.467 May have multiple subsystem ports: No 00:15:35.467 May have multiple controllers: No 00:15:35.467 Associated with SR-IOV VF: No 00:15:35.467 Max Data Transfer Size: 524288 00:15:35.467 Max Number of Namespaces: 256 00:15:35.467 Max Number of I/O Queues: 64 00:15:35.467 NVMe Specification Version (VS): 1.4 00:15:35.467 NVMe Specification Version (Identify): 1.4 00:15:35.467 Maximum Queue Entries: 2048 00:15:35.467 Contiguous Queues Required: Yes 00:15:35.467 Arbitration Mechanisms Supported 00:15:35.467 Weighted Round Robin: Not Supported 00:15:35.467 Vendor Specific: Not Supported 00:15:35.467 Reset Timeout: 7500 ms 00:15:35.467 Doorbell Stride: 4 bytes 00:15:35.467 NVM Subsystem Reset: Not Supported 00:15:35.467 Command Sets Supported 00:15:35.467 NVM Command Set: Supported 00:15:35.467 Boot Partition: Not Supported 00:15:35.467 Memory Page Size Minimum: 4096 bytes 00:15:35.467 Memory Page Size Maximum: 65536 bytes 00:15:35.467 Persistent Memory Region: Not Supported 00:15:35.467 Optional Asynchronous Events Supported 00:15:35.467 Namespace Attribute Notices: Supported 00:15:35.467 Firmware Activation Notices: Not Supported 00:15:35.467 ANA Change Notices: Not Supported 00:15:35.467 PLE Aggregate Log Change Notices: Not Supported 00:15:35.467 LBA Status Info Alert Notices: Not Supported 00:15:35.467 EGE Aggregate Log Change Notices: Not Supported 00:15:35.467 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.467 Zone Descriptor Change Notices: Not Supported 00:15:35.467 Discovery Log Change Notices: Not Supported 00:15:35.467 Controller Attributes 00:15:35.467 128-bit Host Identifier: Not Supported 00:15:35.467 Non-Operational Permissive Mode: Not Supported 00:15:35.467 NVM Sets: Not Supported 00:15:35.467 Read Recovery Levels: Not Supported 00:15:35.467 Endurance Groups: Not Supported 00:15:35.467 Predictable Latency Mode: Not Supported 00:15:35.468 Traffic Based Keep ALive: Not Supported 00:15:35.468 Namespace Granularity: Not Supported 00:15:35.468 SQ Associations: Not Supported 00:15:35.468 UUID List: Not Supported 00:15:35.468 Multi-Domain Subsystem: Not Supported 00:15:35.468 Fixed Capacity Management: Not Supported 00:15:35.468 Variable Capacity Management: Not Supported 00:15:35.468 Delete Endurance Group: Not Supported 00:15:35.468 Delete NVM Set: Not Supported 00:15:35.468 Extended LBA Formats Supported: Supported 00:15:35.468 Flexible Data Placement Supported: Not Supported 00:15:35.468 00:15:35.468 Controller Memory Buffer Support 00:15:35.468 ================================ 00:15:35.468 Supported: No 00:15:35.468 00:15:35.468 Persistent Memory Region Support 00:15:35.468 ================================ 00:15:35.468 Supported: No 00:15:35.468 00:15:35.468 Admin Command Set Attributes 00:15:35.468 
============================ 00:15:35.468 Security Send/Receive: Not Supported 00:15:35.468 Format NVM: Supported 00:15:35.468 Firmware Activate/Download: Not Supported 00:15:35.468 Namespace Management: Supported 00:15:35.468 Device Self-Test: Not Supported 00:15:35.468 Directives: Supported 00:15:35.468 NVMe-MI: Not Supported 00:15:35.468 Virtualization Management: Not Supported 00:15:35.468 Doorbell Buffer Config: Supported 00:15:35.468 Get LBA Status Capability: Not Supported 00:15:35.468 Command & Feature Lockdown Capability: Not Supported 00:15:35.468 Abort Command Limit: 4 00:15:35.468 Async Event Request Limit: 4 00:15:35.468 Number of Firmware Slots: N/A 00:15:35.468 Firmware Slot 1 Read-Only: N/A 00:15:35.468 Firmware Activation Without Reset: N/A 00:15:35.468 Multiple Update Detection Support: N/A 00:15:35.468 Firmware Update Granularity: No Information Provided 00:15:35.468 Per-Namespace SMART Log: Yes 00:15:35.468 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.468 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:15:35.468 Command Effects Log Page: Supported 00:15:35.468 Get Log Page Extended Data: Supported 00:15:35.468 Telemetry Log Pages: Not Supported 00:15:35.468 Persistent Event Log Pages: Not Supported 00:15:35.468 Supported Log Pages Log Page: May Support 00:15:35.468 Commands Supported & Effects Log Page: Not Supported 00:15:35.468 Feature Identifiers & Effects Log Page:May Support 00:15:35.468 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.468 Data Area 4 for Telemetry Log: Not Supported 00:15:35.468 Error Log Page Entries Supported: 1 00:15:35.468 Keep Alive: Not Supported 00:15:35.468 00:15:35.468 NVM Command Set Attributes 00:15:35.468 ========================== 00:15:35.468 Submission Queue Entry Size 00:15:35.468 Max: 64 00:15:35.468 Min: 64 00:15:35.468 Completion Queue Entry Size 00:15:35.468 Max: 16 00:15:35.468 Min: 16 00:15:35.468 Number of Namespaces: 256 00:15:35.468 Compare Command: Supported 00:15:35.468 Write Uncorrectable Command: Not Supported 00:15:35.468 Dataset Management Command: Supported 00:15:35.468 Write Zeroes Command: Supported 00:15:35.468 Set Features Save Field: Supported 00:15:35.468 Reservations: Not Supported 00:15:35.468 Timestamp: Supported 00:15:35.468 Copy: Supported 00:15:35.468 Volatile Write Cache: Present 00:15:35.468 Atomic Write Unit (Normal): 1 00:15:35.468 Atomic Write Unit (PFail): 1 00:15:35.468 Atomic Compare & Write Unit: 1 00:15:35.468 Fused Compare & Write: Not Supported 00:15:35.468 Scatter-Gather List 00:15:35.468 SGL Command Set: Supported 00:15:35.468 SGL Keyed: Not Supported 00:15:35.468 SGL Bit Bucket Descriptor: Not Supported 00:15:35.468 SGL Metadata Pointer: Not Supported 00:15:35.468 Oversized SGL: Not Supported 00:15:35.468 SGL Metadata Address: Not Supported 00:15:35.468 SGL Offset: Not Supported 00:15:35.468 Transport SGL Data Block: Not Supported 00:15:35.468 Replay Protected Memory Block: Not Supported 00:15:35.468 00:15:35.468 Firmware Slot Information 00:15:35.468 ========================= 00:15:35.468 Active slot: 1 00:15:35.468 Slot 1 Firmware Revision: 1.0 00:15:35.468 00:15:35.468 00:15:35.468 Commands Supported and Effects 00:15:35.468 ============================== 00:15:35.468 Admin Commands 00:15:35.468 -------------- 00:15:35.468 Delete I/O Submission Queue (00h): Supported 00:15:35.468 Create I/O Submission Queue (01h): Supported 00:15:35.468 Get Log Page (02h): Supported 00:15:35.468 Delete I/O Completion Queue (04h): Supported 00:15:35.468 Create I/O Completion Queue 
(05h): Supported 00:15:35.468 Identify (06h): Supported 00:15:35.468 Abort (08h): Supported 00:15:35.468 Set Features (09h): Supported 00:15:35.468 Get Features (0Ah): Supported 00:15:35.468 Asynchronous Event Request (0Ch): Supported 00:15:35.468 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.468 Directive Send (19h): Supported 00:15:35.468 Directive Receive (1Ah): Supported 00:15:35.468 Virtualization Management (1Ch): Supported 00:15:35.468 Doorbell Buffer Config (7Ch): Supported 00:15:35.468 Format NVM (80h): Supported LBA-Change 00:15:35.468 I/O Commands 00:15:35.468 ------------ 00:15:35.468 Flush (00h): Supported LBA-Change 00:15:35.468 Write (01h): Supported LBA-Change 00:15:35.468 Read (02h): Supported 00:15:35.468 Compare (05h): Supported 00:15:35.468 Write Zeroes (08h): Supported LBA-Change 00:15:35.468 Dataset Management (09h): Supported LBA-Change 00:15:35.468 Unknown (0Ch): Supported 00:15:35.468 Unknown (12h): Supported 00:15:35.468 Copy (19h): Supported LBA-Change 00:15:35.468 Unknown (1Dh): Supported LBA-Change 00:15:35.468 00:15:35.468 Error Log 00:15:35.468 ========= 00:15:35.468 00:15:35.468 Arbitration 00:15:35.468 =========== 00:15:35.468 Arbitration Burst: no limit 00:15:35.468 00:15:35.468 Power Management 00:15:35.468 ================ 00:15:35.468 Number of Power States: 1 00:15:35.468 Current Power State: Power State #0 00:15:35.468 Power State #0: 00:15:35.468 Max Power: 25.00 W 00:15:35.468 Non-Operational State: Operational 00:15:35.468 Entry Latency: 16 microseconds 00:15:35.468 Exit Latency: 4 microseconds 00:15:35.468 Relative Read Throughput: 0 00:15:35.468 Relative Read Latency: 0 00:15:35.468 Relative Write Throughput: 0 00:15:35.468 Relative Write Latency: 0 00:15:35.468 Idle Power: Not Reported 00:15:35.468 Active Power: Not Reported 00:15:35.468 Non-Operational Permissive Mode: Not Supported 00:15:35.468 00:15:35.468 Health Information 00:15:35.468 ================== 00:15:35.468 Critical Warnings: 00:15:35.468 Available Spare Space: OK 00:15:35.468 Temperature: OK 00:15:35.468 Device Reliability: OK 00:15:35.468 Read Only: No 00:15:35.468 Volatile Memory Backup: OK 00:15:35.468 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.468 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.468 Available Spare: 0% 00:15:35.468 Available Spare Threshold: 0% 00:15:35.468 Life Percentage Used: 0% 00:15:35.468 Data Units Read: 697 00:15:35.468 Data Units Written: 589 00:15:35.468 Host Read Commands: 34128 00:15:35.468 Host Write Commands: 33166 00:15:35.468 Controller Busy Time: 0 minutes 00:15:35.468 Power Cycles: 0 00:15:35.468 Power On Hours: 0 hours 00:15:35.468 Unsafe Shutdowns: 0 00:15:35.468 Unrecoverable Media Errors: 0 00:15:35.468 Lifetime Error Log Entries: 0 00:15:35.468 Warning Temperature Time: 0 minutes 00:15:35.468 Critical Temperature Time: 0 minutes 00:15:35.468 00:15:35.468 Number of Queues 00:15:35.468 ================ 00:15:35.468 Number of I/O Submission Queues: 64 00:15:35.468 Number of I/O Completion Queues: 64 00:15:35.468 00:15:35.468 ZNS Specific Controller Data 00:15:35.468 ============================ 00:15:35.468 Zone Append Size Limit: 0 00:15:35.468 00:15:35.468 00:15:35.468 Active Namespaces 00:15:35.468 ================= 00:15:35.468 Namespace ID:1 00:15:35.468 Error Recovery Timeout: Unlimited 00:15:35.468 Command Set Identifier: NVM (00h) 00:15:35.468 Deallocate: Supported 00:15:35.468 Deallocated/Unwritten Error: Supported 00:15:35.468 Deallocated Read Value: All 0x00 00:15:35.468 Deallocate in Write 
Zeroes: Not Supported 00:15:35.468 Deallocated Guard Field: 0xFFFF 00:15:35.468 Flush: Supported 00:15:35.468 Reservation: Not Supported 00:15:35.468 Metadata Transferred as: Separate Metadata Buffer 00:15:35.468 Namespace Sharing Capabilities: Private 00:15:35.468 Size (in LBAs): 1548666 (5GiB) 00:15:35.468 Capacity (in LBAs): 1548666 (5GiB) 00:15:35.468 Utilization (in LBAs): 1548666 (5GiB) 00:15:35.469 Thin Provisioning: Not Supported 00:15:35.469 Per-NS Atomic Units: No 00:15:35.469 Maximum Single Source Range Length: 128 00:15:35.469 Maximum Copy Length: 128 00:15:35.469 Maximum Source Range Count: 128 00:15:35.469 NGUID/EUI64 Never Reused: No 00:15:35.469 Namespace Write Protected: No 00:15:35.469 Number of LBA Formats: 8 00:15:35.469 Current LBA Format: LBA Format #07 00:15:35.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:35.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.469 00:15:35.469 NVM Specific Namespace Data 00:15:35.469 =========================== 00:15:35.469 Logical Block Storage Tag Mask: 0 00:15:35.469 Protection Information Capabilities: 00:15:35.469 16b Guard Protection Information Storage Tag Support: No 00:15:35.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.469 Storage Tag Check Read Support: No 00:15:35.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.469 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:35.469 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:15:35.727 ===================================================== 00:15:35.727 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:35.727 ===================================================== 00:15:35.727 Controller Capabilities/Features 00:15:35.727 ================================ 00:15:35.727 Vendor ID: 1b36 00:15:35.727 Subsystem Vendor ID: 1af4 00:15:35.727 Serial Number: 12341 00:15:35.727 Model Number: QEMU NVMe Ctrl 00:15:35.727 Firmware Version: 8.0.0 00:15:35.727 Recommended Arb Burst: 6 00:15:35.727 IEEE OUI Identifier: 00 54 52 00:15:35.727 Multi-path I/O 00:15:35.727 May have multiple subsystem ports: No 00:15:35.727 May have multiple controllers: No 00:15:35.727 Associated with SR-IOV VF: No 00:15:35.727 Max Data Transfer Size: 524288 00:15:35.727 Max Number of 
Namespaces: 256 00:15:35.727 Max Number of I/O Queues: 64 00:15:35.727 NVMe Specification Version (VS): 1.4 00:15:35.727 NVMe Specification Version (Identify): 1.4 00:15:35.727 Maximum Queue Entries: 2048 00:15:35.727 Contiguous Queues Required: Yes 00:15:35.727 Arbitration Mechanisms Supported 00:15:35.727 Weighted Round Robin: Not Supported 00:15:35.727 Vendor Specific: Not Supported 00:15:35.727 Reset Timeout: 7500 ms 00:15:35.727 Doorbell Stride: 4 bytes 00:15:35.727 NVM Subsystem Reset: Not Supported 00:15:35.727 Command Sets Supported 00:15:35.727 NVM Command Set: Supported 00:15:35.727 Boot Partition: Not Supported 00:15:35.727 Memory Page Size Minimum: 4096 bytes 00:15:35.727 Memory Page Size Maximum: 65536 bytes 00:15:35.727 Persistent Memory Region: Not Supported 00:15:35.727 Optional Asynchronous Events Supported 00:15:35.727 Namespace Attribute Notices: Supported 00:15:35.727 Firmware Activation Notices: Not Supported 00:15:35.727 ANA Change Notices: Not Supported 00:15:35.727 PLE Aggregate Log Change Notices: Not Supported 00:15:35.727 LBA Status Info Alert Notices: Not Supported 00:15:35.727 EGE Aggregate Log Change Notices: Not Supported 00:15:35.727 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.727 Zone Descriptor Change Notices: Not Supported 00:15:35.727 Discovery Log Change Notices: Not Supported 00:15:35.727 Controller Attributes 00:15:35.727 128-bit Host Identifier: Not Supported 00:15:35.727 Non-Operational Permissive Mode: Not Supported 00:15:35.727 NVM Sets: Not Supported 00:15:35.727 Read Recovery Levels: Not Supported 00:15:35.727 Endurance Groups: Not Supported 00:15:35.727 Predictable Latency Mode: Not Supported 00:15:35.727 Traffic Based Keep ALive: Not Supported 00:15:35.727 Namespace Granularity: Not Supported 00:15:35.727 SQ Associations: Not Supported 00:15:35.727 UUID List: Not Supported 00:15:35.727 Multi-Domain Subsystem: Not Supported 00:15:35.727 Fixed Capacity Management: Not Supported 00:15:35.727 Variable Capacity Management: Not Supported 00:15:35.727 Delete Endurance Group: Not Supported 00:15:35.727 Delete NVM Set: Not Supported 00:15:35.727 Extended LBA Formats Supported: Supported 00:15:35.727 Flexible Data Placement Supported: Not Supported 00:15:35.727 00:15:35.727 Controller Memory Buffer Support 00:15:35.727 ================================ 00:15:35.727 Supported: No 00:15:35.727 00:15:35.727 Persistent Memory Region Support 00:15:35.727 ================================ 00:15:35.727 Supported: No 00:15:35.727 00:15:35.727 Admin Command Set Attributes 00:15:35.727 ============================ 00:15:35.727 Security Send/Receive: Not Supported 00:15:35.727 Format NVM: Supported 00:15:35.727 Firmware Activate/Download: Not Supported 00:15:35.727 Namespace Management: Supported 00:15:35.727 Device Self-Test: Not Supported 00:15:35.727 Directives: Supported 00:15:35.727 NVMe-MI: Not Supported 00:15:35.727 Virtualization Management: Not Supported 00:15:35.727 Doorbell Buffer Config: Supported 00:15:35.727 Get LBA Status Capability: Not Supported 00:15:35.727 Command & Feature Lockdown Capability: Not Supported 00:15:35.727 Abort Command Limit: 4 00:15:35.727 Async Event Request Limit: 4 00:15:35.727 Number of Firmware Slots: N/A 00:15:35.727 Firmware Slot 1 Read-Only: N/A 00:15:35.727 Firmware Activation Without Reset: N/A 00:15:35.727 Multiple Update Detection Support: N/A 00:15:35.727 Firmware Update Granularity: No Information Provided 00:15:35.727 Per-Namespace SMART Log: Yes 00:15:35.727 Asymmetric Namespace Access Log Page: Not 
Supported 00:15:35.727 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:15:35.727 Command Effects Log Page: Supported 00:15:35.727 Get Log Page Extended Data: Supported 00:15:35.727 Telemetry Log Pages: Not Supported 00:15:35.727 Persistent Event Log Pages: Not Supported 00:15:35.727 Supported Log Pages Log Page: May Support 00:15:35.727 Commands Supported & Effects Log Page: Not Supported 00:15:35.727 Feature Identifiers & Effects Log Page:May Support 00:15:35.727 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.727 Data Area 4 for Telemetry Log: Not Supported 00:15:35.727 Error Log Page Entries Supported: 1 00:15:35.727 Keep Alive: Not Supported 00:15:35.727 00:15:35.727 NVM Command Set Attributes 00:15:35.727 ========================== 00:15:35.727 Submission Queue Entry Size 00:15:35.727 Max: 64 00:15:35.727 Min: 64 00:15:35.727 Completion Queue Entry Size 00:15:35.727 Max: 16 00:15:35.727 Min: 16 00:15:35.727 Number of Namespaces: 256 00:15:35.727 Compare Command: Supported 00:15:35.727 Write Uncorrectable Command: Not Supported 00:15:35.727 Dataset Management Command: Supported 00:15:35.727 Write Zeroes Command: Supported 00:15:35.727 Set Features Save Field: Supported 00:15:35.727 Reservations: Not Supported 00:15:35.727 Timestamp: Supported 00:15:35.727 Copy: Supported 00:15:35.727 Volatile Write Cache: Present 00:15:35.727 Atomic Write Unit (Normal): 1 00:15:35.727 Atomic Write Unit (PFail): 1 00:15:35.727 Atomic Compare & Write Unit: 1 00:15:35.727 Fused Compare & Write: Not Supported 00:15:35.727 Scatter-Gather List 00:15:35.727 SGL Command Set: Supported 00:15:35.727 SGL Keyed: Not Supported 00:15:35.727 SGL Bit Bucket Descriptor: Not Supported 00:15:35.727 SGL Metadata Pointer: Not Supported 00:15:35.727 Oversized SGL: Not Supported 00:15:35.727 SGL Metadata Address: Not Supported 00:15:35.727 SGL Offset: Not Supported 00:15:35.727 Transport SGL Data Block: Not Supported 00:15:35.727 Replay Protected Memory Block: Not Supported 00:15:35.727 00:15:35.727 Firmware Slot Information 00:15:35.727 ========================= 00:15:35.727 Active slot: 1 00:15:35.727 Slot 1 Firmware Revision: 1.0 00:15:35.727 00:15:35.727 00:15:35.727 Commands Supported and Effects 00:15:35.727 ============================== 00:15:35.727 Admin Commands 00:15:35.727 -------------- 00:15:35.727 Delete I/O Submission Queue (00h): Supported 00:15:35.727 Create I/O Submission Queue (01h): Supported 00:15:35.727 Get Log Page (02h): Supported 00:15:35.727 Delete I/O Completion Queue (04h): Supported 00:15:35.727 Create I/O Completion Queue (05h): Supported 00:15:35.727 Identify (06h): Supported 00:15:35.727 Abort (08h): Supported 00:15:35.727 Set Features (09h): Supported 00:15:35.727 Get Features (0Ah): Supported 00:15:35.727 Asynchronous Event Request (0Ch): Supported 00:15:35.727 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:35.728 Directive Send (19h): Supported 00:15:35.728 Directive Receive (1Ah): Supported 00:15:35.728 Virtualization Management (1Ch): Supported 00:15:35.728 Doorbell Buffer Config (7Ch): Supported 00:15:35.728 Format NVM (80h): Supported LBA-Change 00:15:35.728 I/O Commands 00:15:35.728 ------------ 00:15:35.728 Flush (00h): Supported LBA-Change 00:15:35.728 Write (01h): Supported LBA-Change 00:15:35.728 Read (02h): Supported 00:15:35.728 Compare (05h): Supported 00:15:35.728 Write Zeroes (08h): Supported LBA-Change 00:15:35.728 Dataset Management (09h): Supported LBA-Change 00:15:35.728 Unknown (0Ch): Supported 00:15:35.728 Unknown (12h): Supported 00:15:35.728 
Copy (19h): Supported LBA-Change 00:15:35.728 Unknown (1Dh): Supported LBA-Change 00:15:35.728 00:15:35.728 Error Log 00:15:35.728 ========= 00:15:35.728 00:15:35.728 Arbitration 00:15:35.728 =========== 00:15:35.728 Arbitration Burst: no limit 00:15:35.728 00:15:35.728 Power Management 00:15:35.728 ================ 00:15:35.728 Number of Power States: 1 00:15:35.728 Current Power State: Power State #0 00:15:35.728 Power State #0: 00:15:35.728 Max Power: 25.00 W 00:15:35.728 Non-Operational State: Operational 00:15:35.728 Entry Latency: 16 microseconds 00:15:35.728 Exit Latency: 4 microseconds 00:15:35.728 Relative Read Throughput: 0 00:15:35.728 Relative Read Latency: 0 00:15:35.728 Relative Write Throughput: 0 00:15:35.728 Relative Write Latency: 0 00:15:35.728 Idle Power: Not Reported 00:15:35.728 Active Power: Not Reported 00:15:35.728 Non-Operational Permissive Mode: Not Supported 00:15:35.728 00:15:35.728 Health Information 00:15:35.728 ================== 00:15:35.728 Critical Warnings: 00:15:35.728 Available Spare Space: OK 00:15:35.728 Temperature: OK 00:15:35.728 Device Reliability: OK 00:15:35.728 Read Only: No 00:15:35.728 Volatile Memory Backup: OK 00:15:35.728 Current Temperature: 323 Kelvin (50 Celsius) 00:15:35.728 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:35.728 Available Spare: 0% 00:15:35.728 Available Spare Threshold: 0% 00:15:35.728 Life Percentage Used: 0% 00:15:35.728 Data Units Read: 1160 00:15:35.728 Data Units Written: 944 00:15:35.728 Host Read Commands: 51994 00:15:35.728 Host Write Commands: 49083 00:15:35.728 Controller Busy Time: 0 minutes 00:15:35.728 Power Cycles: 0 00:15:35.728 Power On Hours: 0 hours 00:15:35.728 Unsafe Shutdowns: 0 00:15:35.728 Unrecoverable Media Errors: 0 00:15:35.728 Lifetime Error Log Entries: 0 00:15:35.728 Warning Temperature Time: 0 minutes 00:15:35.728 Critical Temperature Time: 0 minutes 00:15:35.728 00:15:35.728 Number of Queues 00:15:35.728 ================ 00:15:35.728 Number of I/O Submission Queues: 64 00:15:35.728 Number of I/O Completion Queues: 64 00:15:35.728 00:15:35.728 ZNS Specific Controller Data 00:15:35.728 ============================ 00:15:35.728 Zone Append Size Limit: 0 00:15:35.728 00:15:35.728 00:15:35.728 Active Namespaces 00:15:35.728 ================= 00:15:35.728 Namespace ID:1 00:15:35.728 Error Recovery Timeout: Unlimited 00:15:35.728 Command Set Identifier: NVM (00h) 00:15:35.728 Deallocate: Supported 00:15:35.728 Deallocated/Unwritten Error: Supported 00:15:35.728 Deallocated Read Value: All 0x00 00:15:35.728 Deallocate in Write Zeroes: Not Supported 00:15:35.728 Deallocated Guard Field: 0xFFFF 00:15:35.728 Flush: Supported 00:15:35.728 Reservation: Not Supported 00:15:35.728 Namespace Sharing Capabilities: Private 00:15:35.728 Size (in LBAs): 1310720 (5GiB) 00:15:35.728 Capacity (in LBAs): 1310720 (5GiB) 00:15:35.728 Utilization (in LBAs): 1310720 (5GiB) 00:15:35.728 Thin Provisioning: Not Supported 00:15:35.728 Per-NS Atomic Units: No 00:15:35.728 Maximum Single Source Range Length: 128 00:15:35.728 Maximum Copy Length: 128 00:15:35.728 Maximum Source Range Count: 128 00:15:35.728 NGUID/EUI64 Never Reused: No 00:15:35.728 Namespace Write Protected: No 00:15:35.728 Number of LBA Formats: 8 00:15:35.728 Current LBA Format: LBA Format #04 00:15:35.728 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.728 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:35.728 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:35.728 LBA Format #03: Data Size: 512 Metadata Size: 64 
00:15:35.728 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:35.728 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:35.728 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:35.728 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:35.728 00:15:35.728 NVM Specific Namespace Data 00:15:35.728 =========================== 00:15:35.728 Logical Block Storage Tag Mask: 0 00:15:35.728 Protection Information Capabilities: 00:15:35.728 16b Guard Protection Information Storage Tag Support: No 00:15:35.728 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:35.728 Storage Tag Check Read Support: No 00:15:35.728 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.728 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:35.985 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:35.985 17:16:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:15:36.244 ===================================================== 00:15:36.244 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:36.244 ===================================================== 00:15:36.244 Controller Capabilities/Features 00:15:36.244 ================================ 00:15:36.244 Vendor ID: 1b36 00:15:36.244 Subsystem Vendor ID: 1af4 00:15:36.244 Serial Number: 12342 00:15:36.244 Model Number: QEMU NVMe Ctrl 00:15:36.244 Firmware Version: 8.0.0 00:15:36.244 Recommended Arb Burst: 6 00:15:36.244 IEEE OUI Identifier: 00 54 52 00:15:36.244 Multi-path I/O 00:15:36.244 May have multiple subsystem ports: No 00:15:36.244 May have multiple controllers: No 00:15:36.244 Associated with SR-IOV VF: No 00:15:36.244 Max Data Transfer Size: 524288 00:15:36.244 Max Number of Namespaces: 256 00:15:36.244 Max Number of I/O Queues: 64 00:15:36.244 NVMe Specification Version (VS): 1.4 00:15:36.244 NVMe Specification Version (Identify): 1.4 00:15:36.244 Maximum Queue Entries: 2048 00:15:36.244 Contiguous Queues Required: Yes 00:15:36.244 Arbitration Mechanisms Supported 00:15:36.244 Weighted Round Robin: Not Supported 00:15:36.244 Vendor Specific: Not Supported 00:15:36.244 Reset Timeout: 7500 ms 00:15:36.244 Doorbell Stride: 4 bytes 00:15:36.244 NVM Subsystem Reset: Not Supported 00:15:36.244 Command Sets Supported 00:15:36.244 NVM Command Set: Supported 00:15:36.244 Boot Partition: Not Supported 00:15:36.244 Memory Page Size Minimum: 4096 bytes 00:15:36.244 Memory Page Size Maximum: 65536 bytes 00:15:36.244 Persistent Memory Region: Not Supported 00:15:36.244 Optional Asynchronous Events Supported 00:15:36.244 Namespace Attribute Notices: Supported 00:15:36.244 Firmware Activation Notices: Not Supported 00:15:36.244 ANA Change Notices: Not Supported 00:15:36.244 PLE Aggregate Log Change 
Notices: Not Supported 00:15:36.244 LBA Status Info Alert Notices: Not Supported 00:15:36.244 EGE Aggregate Log Change Notices: Not Supported 00:15:36.244 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.244 Zone Descriptor Change Notices: Not Supported 00:15:36.244 Discovery Log Change Notices: Not Supported 00:15:36.244 Controller Attributes 00:15:36.244 128-bit Host Identifier: Not Supported 00:15:36.244 Non-Operational Permissive Mode: Not Supported 00:15:36.244 NVM Sets: Not Supported 00:15:36.244 Read Recovery Levels: Not Supported 00:15:36.244 Endurance Groups: Not Supported 00:15:36.244 Predictable Latency Mode: Not Supported 00:15:36.244 Traffic Based Keep ALive: Not Supported 00:15:36.244 Namespace Granularity: Not Supported 00:15:36.244 SQ Associations: Not Supported 00:15:36.244 UUID List: Not Supported 00:15:36.244 Multi-Domain Subsystem: Not Supported 00:15:36.244 Fixed Capacity Management: Not Supported 00:15:36.244 Variable Capacity Management: Not Supported 00:15:36.244 Delete Endurance Group: Not Supported 00:15:36.244 Delete NVM Set: Not Supported 00:15:36.244 Extended LBA Formats Supported: Supported 00:15:36.244 Flexible Data Placement Supported: Not Supported 00:15:36.244 00:15:36.244 Controller Memory Buffer Support 00:15:36.244 ================================ 00:15:36.244 Supported: No 00:15:36.244 00:15:36.244 Persistent Memory Region Support 00:15:36.244 ================================ 00:15:36.244 Supported: No 00:15:36.244 00:15:36.244 Admin Command Set Attributes 00:15:36.244 ============================ 00:15:36.244 Security Send/Receive: Not Supported 00:15:36.244 Format NVM: Supported 00:15:36.244 Firmware Activate/Download: Not Supported 00:15:36.244 Namespace Management: Supported 00:15:36.244 Device Self-Test: Not Supported 00:15:36.244 Directives: Supported 00:15:36.244 NVMe-MI: Not Supported 00:15:36.244 Virtualization Management: Not Supported 00:15:36.244 Doorbell Buffer Config: Supported 00:15:36.244 Get LBA Status Capability: Not Supported 00:15:36.244 Command & Feature Lockdown Capability: Not Supported 00:15:36.244 Abort Command Limit: 4 00:15:36.244 Async Event Request Limit: 4 00:15:36.244 Number of Firmware Slots: N/A 00:15:36.244 Firmware Slot 1 Read-Only: N/A 00:15:36.244 Firmware Activation Without Reset: N/A 00:15:36.244 Multiple Update Detection Support: N/A 00:15:36.244 Firmware Update Granularity: No Information Provided 00:15:36.244 Per-Namespace SMART Log: Yes 00:15:36.244 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.244 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:15:36.244 Command Effects Log Page: Supported 00:15:36.244 Get Log Page Extended Data: Supported 00:15:36.244 Telemetry Log Pages: Not Supported 00:15:36.244 Persistent Event Log Pages: Not Supported 00:15:36.244 Supported Log Pages Log Page: May Support 00:15:36.244 Commands Supported & Effects Log Page: Not Supported 00:15:36.244 Feature Identifiers & Effects Log Page:May Support 00:15:36.244 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.244 Data Area 4 for Telemetry Log: Not Supported 00:15:36.244 Error Log Page Entries Supported: 1 00:15:36.244 Keep Alive: Not Supported 00:15:36.244 00:15:36.244 NVM Command Set Attributes 00:15:36.244 ========================== 00:15:36.244 Submission Queue Entry Size 00:15:36.244 Max: 64 00:15:36.244 Min: 64 00:15:36.244 Completion Queue Entry Size 00:15:36.244 Max: 16 00:15:36.244 Min: 16 00:15:36.244 Number of Namespaces: 256 00:15:36.244 Compare Command: Supported 00:15:36.244 Write 
Uncorrectable Command: Not Supported 00:15:36.244 Dataset Management Command: Supported 00:15:36.244 Write Zeroes Command: Supported 00:15:36.244 Set Features Save Field: Supported 00:15:36.244 Reservations: Not Supported 00:15:36.244 Timestamp: Supported 00:15:36.244 Copy: Supported 00:15:36.244 Volatile Write Cache: Present 00:15:36.244 Atomic Write Unit (Normal): 1 00:15:36.244 Atomic Write Unit (PFail): 1 00:15:36.244 Atomic Compare & Write Unit: 1 00:15:36.244 Fused Compare & Write: Not Supported 00:15:36.244 Scatter-Gather List 00:15:36.244 SGL Command Set: Supported 00:15:36.244 SGL Keyed: Not Supported 00:15:36.244 SGL Bit Bucket Descriptor: Not Supported 00:15:36.244 SGL Metadata Pointer: Not Supported 00:15:36.244 Oversized SGL: Not Supported 00:15:36.244 SGL Metadata Address: Not Supported 00:15:36.244 SGL Offset: Not Supported 00:15:36.244 Transport SGL Data Block: Not Supported 00:15:36.244 Replay Protected Memory Block: Not Supported 00:15:36.244 00:15:36.244 Firmware Slot Information 00:15:36.244 ========================= 00:15:36.244 Active slot: 1 00:15:36.244 Slot 1 Firmware Revision: 1.0 00:15:36.244 00:15:36.244 00:15:36.244 Commands Supported and Effects 00:15:36.244 ============================== 00:15:36.244 Admin Commands 00:15:36.244 -------------- 00:15:36.244 Delete I/O Submission Queue (00h): Supported 00:15:36.244 Create I/O Submission Queue (01h): Supported 00:15:36.244 Get Log Page (02h): Supported 00:15:36.244 Delete I/O Completion Queue (04h): Supported 00:15:36.244 Create I/O Completion Queue (05h): Supported 00:15:36.244 Identify (06h): Supported 00:15:36.244 Abort (08h): Supported 00:15:36.244 Set Features (09h): Supported 00:15:36.244 Get Features (0Ah): Supported 00:15:36.244 Asynchronous Event Request (0Ch): Supported 00:15:36.244 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:36.244 Directive Send (19h): Supported 00:15:36.244 Directive Receive (1Ah): Supported 00:15:36.244 Virtualization Management (1Ch): Supported 00:15:36.244 Doorbell Buffer Config (7Ch): Supported 00:15:36.244 Format NVM (80h): Supported LBA-Change 00:15:36.244 I/O Commands 00:15:36.244 ------------ 00:15:36.244 Flush (00h): Supported LBA-Change 00:15:36.244 Write (01h): Supported LBA-Change 00:15:36.244 Read (02h): Supported 00:15:36.244 Compare (05h): Supported 00:15:36.244 Write Zeroes (08h): Supported LBA-Change 00:15:36.244 Dataset Management (09h): Supported LBA-Change 00:15:36.244 Unknown (0Ch): Supported 00:15:36.244 Unknown (12h): Supported 00:15:36.244 Copy (19h): Supported LBA-Change 00:15:36.244 Unknown (1Dh): Supported LBA-Change 00:15:36.244 00:15:36.244 Error Log 00:15:36.244 ========= 00:15:36.244 00:15:36.244 Arbitration 00:15:36.244 =========== 00:15:36.244 Arbitration Burst: no limit 00:15:36.244 00:15:36.244 Power Management 00:15:36.244 ================ 00:15:36.244 Number of Power States: 1 00:15:36.244 Current Power State: Power State #0 00:15:36.244 Power State #0: 00:15:36.244 Max Power: 25.00 W 00:15:36.244 Non-Operational State: Operational 00:15:36.245 Entry Latency: 16 microseconds 00:15:36.245 Exit Latency: 4 microseconds 00:15:36.245 Relative Read Throughput: 0 00:15:36.245 Relative Read Latency: 0 00:15:36.245 Relative Write Throughput: 0 00:15:36.245 Relative Write Latency: 0 00:15:36.245 Idle Power: Not Reported 00:15:36.245 Active Power: Not Reported 00:15:36.245 Non-Operational Permissive Mode: Not Supported 00:15:36.245 00:15:36.245 Health Information 00:15:36.245 ================== 00:15:36.245 Critical Warnings: 00:15:36.245 
Available Spare Space: OK 00:15:36.245 Temperature: OK 00:15:36.245 Device Reliability: OK 00:15:36.245 Read Only: No 00:15:36.245 Volatile Memory Backup: OK 00:15:36.245 Current Temperature: 323 Kelvin (50 Celsius) 00:15:36.245 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:36.245 Available Spare: 0% 00:15:36.245 Available Spare Threshold: 0% 00:15:36.245 Life Percentage Used: 0% 00:15:36.245 Data Units Read: 2281 00:15:36.245 Data Units Written: 1961 00:15:36.245 Host Read Commands: 104622 00:15:36.245 Host Write Commands: 100392 00:15:36.245 Controller Busy Time: 0 minutes 00:15:36.245 Power Cycles: 0 00:15:36.245 Power On Hours: 0 hours 00:15:36.245 Unsafe Shutdowns: 0 00:15:36.245 Unrecoverable Media Errors: 0 00:15:36.245 Lifetime Error Log Entries: 0 00:15:36.245 Warning Temperature Time: 0 minutes 00:15:36.245 Critical Temperature Time: 0 minutes 00:15:36.245 00:15:36.245 Number of Queues 00:15:36.245 ================ 00:15:36.245 Number of I/O Submission Queues: 64 00:15:36.245 Number of I/O Completion Queues: 64 00:15:36.245 00:15:36.245 ZNS Specific Controller Data 00:15:36.245 ============================ 00:15:36.245 Zone Append Size Limit: 0 00:15:36.245 00:15:36.245 00:15:36.245 Active Namespaces 00:15:36.245 ================= 00:15:36.245 Namespace ID:1 00:15:36.245 Error Recovery Timeout: Unlimited 00:15:36.245 Command Set Identifier: NVM (00h) 00:15:36.245 Deallocate: Supported 00:15:36.245 Deallocated/Unwritten Error: Supported 00:15:36.245 Deallocated Read Value: All 0x00 00:15:36.245 Deallocate in Write Zeroes: Not Supported 00:15:36.245 Deallocated Guard Field: 0xFFFF 00:15:36.245 Flush: Supported 00:15:36.245 Reservation: Not Supported 00:15:36.245 Namespace Sharing Capabilities: Private 00:15:36.245 Size (in LBAs): 1048576 (4GiB) 00:15:36.245 Capacity (in LBAs): 1048576 (4GiB) 00:15:36.245 Utilization (in LBAs): 1048576 (4GiB) 00:15:36.245 Thin Provisioning: Not Supported 00:15:36.245 Per-NS Atomic Units: No 00:15:36.245 Maximum Single Source Range Length: 128 00:15:36.245 Maximum Copy Length: 128 00:15:36.245 Maximum Source Range Count: 128 00:15:36.245 NGUID/EUI64 Never Reused: No 00:15:36.245 Namespace Write Protected: No 00:15:36.245 Number of LBA Formats: 8 00:15:36.245 Current LBA Format: LBA Format #04 00:15:36.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.245 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:36.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:36.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:36.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:36.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:36.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:36.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:36.245 00:15:36.245 NVM Specific Namespace Data 00:15:36.245 =========================== 00:15:36.245 Logical Block Storage Tag Mask: 0 00:15:36.245 Protection Information Capabilities: 00:15:36.245 16b Guard Protection Information Storage Tag Support: No 00:15:36.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:36.245 Storage Tag Check Read Support: No 00:15:36.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #03: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Namespace ID:2 00:15:36.245 Error Recovery Timeout: Unlimited 00:15:36.245 Command Set Identifier: NVM (00h) 00:15:36.245 Deallocate: Supported 00:15:36.245 Deallocated/Unwritten Error: Supported 00:15:36.245 Deallocated Read Value: All 0x00 00:15:36.245 Deallocate in Write Zeroes: Not Supported 00:15:36.245 Deallocated Guard Field: 0xFFFF 00:15:36.245 Flush: Supported 00:15:36.245 Reservation: Not Supported 00:15:36.245 Namespace Sharing Capabilities: Private 00:15:36.245 Size (in LBAs): 1048576 (4GiB) 00:15:36.245 Capacity (in LBAs): 1048576 (4GiB) 00:15:36.245 Utilization (in LBAs): 1048576 (4GiB) 00:15:36.245 Thin Provisioning: Not Supported 00:15:36.245 Per-NS Atomic Units: No 00:15:36.245 Maximum Single Source Range Length: 128 00:15:36.245 Maximum Copy Length: 128 00:15:36.245 Maximum Source Range Count: 128 00:15:36.245 NGUID/EUI64 Never Reused: No 00:15:36.245 Namespace Write Protected: No 00:15:36.245 Number of LBA Formats: 8 00:15:36.245 Current LBA Format: LBA Format #04 00:15:36.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.245 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:36.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:36.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:36.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:36.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:36.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:36.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:36.245 00:15:36.245 NVM Specific Namespace Data 00:15:36.245 =========================== 00:15:36.245 Logical Block Storage Tag Mask: 0 00:15:36.245 Protection Information Capabilities: 00:15:36.245 16b Guard Protection Information Storage Tag Support: No 00:15:36.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:36.245 Storage Tag Check Read Support: No 00:15:36.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Namespace ID:3 00:15:36.245 Error Recovery Timeout: Unlimited 00:15:36.245 Command Set Identifier: NVM (00h) 00:15:36.245 Deallocate: Supported 00:15:36.245 Deallocated/Unwritten Error: Supported 00:15:36.245 Deallocated Read Value: All 0x00 00:15:36.245 Deallocate in Write Zeroes: Not Supported 00:15:36.245 Deallocated Guard Field: 0xFFFF 
00:15:36.245 Flush: Supported 00:15:36.245 Reservation: Not Supported 00:15:36.245 Namespace Sharing Capabilities: Private 00:15:36.245 Size (in LBAs): 1048576 (4GiB) 00:15:36.245 Capacity (in LBAs): 1048576 (4GiB) 00:15:36.245 Utilization (in LBAs): 1048576 (4GiB) 00:15:36.245 Thin Provisioning: Not Supported 00:15:36.245 Per-NS Atomic Units: No 00:15:36.245 Maximum Single Source Range Length: 128 00:15:36.245 Maximum Copy Length: 128 00:15:36.245 Maximum Source Range Count: 128 00:15:36.245 NGUID/EUI64 Never Reused: No 00:15:36.245 Namespace Write Protected: No 00:15:36.245 Number of LBA Formats: 8 00:15:36.245 Current LBA Format: LBA Format #04 00:15:36.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.245 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:36.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:36.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:36.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:36.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:36.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:36.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:36.245 00:15:36.245 NVM Specific Namespace Data 00:15:36.245 =========================== 00:15:36.245 Logical Block Storage Tag Mask: 0 00:15:36.245 Protection Information Capabilities: 00:15:36.245 16b Guard Protection Information Storage Tag Support: No 00:15:36.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:36.245 Storage Tag Check Read Support: No 00:15:36.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.245 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.246 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.246 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.246 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.246 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.246 17:16:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:36.246 17:16:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:15:36.505 ===================================================== 00:15:36.505 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:36.505 ===================================================== 00:15:36.505 Controller Capabilities/Features 00:15:36.505 ================================ 00:15:36.505 Vendor ID: 1b36 00:15:36.505 Subsystem Vendor ID: 1af4 00:15:36.505 Serial Number: 12343 00:15:36.505 Model Number: QEMU NVMe Ctrl 00:15:36.505 Firmware Version: 8.0.0 00:15:36.505 Recommended Arb Burst: 6 00:15:36.505 IEEE OUI Identifier: 00 54 52 00:15:36.505 Multi-path I/O 00:15:36.505 May have multiple subsystem ports: No 00:15:36.505 May have multiple controllers: Yes 00:15:36.505 Associated with SR-IOV VF: No 00:15:36.505 Max Data Transfer Size: 524288 00:15:36.505 Max Number of Namespaces: 256 00:15:36.505 Max Number of I/O Queues: 64 00:15:36.505 NVMe Specification Version (VS): 1.4 00:15:36.505 NVMe 
Specification Version (Identify): 1.4 00:15:36.505 Maximum Queue Entries: 2048 00:15:36.505 Contiguous Queues Required: Yes 00:15:36.505 Arbitration Mechanisms Supported 00:15:36.505 Weighted Round Robin: Not Supported 00:15:36.505 Vendor Specific: Not Supported 00:15:36.505 Reset Timeout: 7500 ms 00:15:36.505 Doorbell Stride: 4 bytes 00:15:36.505 NVM Subsystem Reset: Not Supported 00:15:36.505 Command Sets Supported 00:15:36.505 NVM Command Set: Supported 00:15:36.505 Boot Partition: Not Supported 00:15:36.505 Memory Page Size Minimum: 4096 bytes 00:15:36.505 Memory Page Size Maximum: 65536 bytes 00:15:36.505 Persistent Memory Region: Not Supported 00:15:36.505 Optional Asynchronous Events Supported 00:15:36.505 Namespace Attribute Notices: Supported 00:15:36.505 Firmware Activation Notices: Not Supported 00:15:36.505 ANA Change Notices: Not Supported 00:15:36.505 PLE Aggregate Log Change Notices: Not Supported 00:15:36.505 LBA Status Info Alert Notices: Not Supported 00:15:36.505 EGE Aggregate Log Change Notices: Not Supported 00:15:36.505 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.505 Zone Descriptor Change Notices: Not Supported 00:15:36.505 Discovery Log Change Notices: Not Supported 00:15:36.505 Controller Attributes 00:15:36.505 128-bit Host Identifier: Not Supported 00:15:36.505 Non-Operational Permissive Mode: Not Supported 00:15:36.505 NVM Sets: Not Supported 00:15:36.505 Read Recovery Levels: Not Supported 00:15:36.505 Endurance Groups: Supported 00:15:36.505 Predictable Latency Mode: Not Supported 00:15:36.505 Traffic Based Keep Alive: Not Supported 00:15:36.505 Namespace Granularity: Not Supported 00:15:36.505 SQ Associations: Not Supported 00:15:36.505 UUID List: Not Supported 00:15:36.505 Multi-Domain Subsystem: Not Supported 00:15:36.505 Fixed Capacity Management: Not Supported 00:15:36.505 Variable Capacity Management: Not Supported 00:15:36.505 Delete Endurance Group: Not Supported 00:15:36.505 Delete NVM Set: Not Supported 00:15:36.505 Extended LBA Formats Supported: Supported 00:15:36.505 Flexible Data Placement Supported: Supported 00:15:36.505 00:15:36.505 Controller Memory Buffer Support 00:15:36.505 ================================ 00:15:36.505 Supported: No 00:15:36.505 00:15:36.505 Persistent Memory Region Support 00:15:36.505 ================================ 00:15:36.505 Supported: No 00:15:36.505 00:15:36.505 Admin Command Set Attributes 00:15:36.505 ============================ 00:15:36.505 Security Send/Receive: Not Supported 00:15:36.505 Format NVM: Supported 00:15:36.505 Firmware Activate/Download: Not Supported 00:15:36.505 Namespace Management: Supported 00:15:36.505 Device Self-Test: Not Supported 00:15:36.505 Directives: Supported 00:15:36.505 NVMe-MI: Not Supported 00:15:36.505 Virtualization Management: Not Supported 00:15:36.505 Doorbell Buffer Config: Supported 00:15:36.505 Get LBA Status Capability: Not Supported 00:15:36.505 Command & Feature Lockdown Capability: Not Supported 00:15:36.505 Abort Command Limit: 4 00:15:36.505 Async Event Request Limit: 4 00:15:36.505 Number of Firmware Slots: N/A 00:15:36.505 Firmware Slot 1 Read-Only: N/A 00:15:36.505 Firmware Activation Without Reset: N/A 00:15:36.505 Multiple Update Detection Support: N/A 00:15:36.505 Firmware Update Granularity: No Information Provided 00:15:36.505 Per-Namespace SMART Log: Yes 00:15:36.505 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.505 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:15:36.505 Command Effects Log Page: Supported 00:15:36.505 
Get Log Page Extended Data: Supported 00:15:36.505 Telemetry Log Pages: Not Supported 00:15:36.505 Persistent Event Log Pages: Not Supported 00:15:36.505 Supported Log Pages Log Page: May Support 00:15:36.505 Commands Supported & Effects Log Page: Not Supported 00:15:36.505 Feature Identifiers & Effects Log Page: May Support 00:15:36.505 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.505 Data Area 4 for Telemetry Log: Not Supported 00:15:36.505 Error Log Page Entries Supported: 1 00:15:36.505 Keep Alive: Not Supported 00:15:36.505 00:15:36.505 NVM Command Set Attributes 00:15:36.505 ========================== 00:15:36.505 Submission Queue Entry Size 00:15:36.505 Max: 64 00:15:36.505 Min: 64 00:15:36.505 Completion Queue Entry Size 00:15:36.505 Max: 16 00:15:36.505 Min: 16 00:15:36.505 Number of Namespaces: 256 00:15:36.505 Compare Command: Supported 00:15:36.505 Write Uncorrectable Command: Not Supported 00:15:36.505 Dataset Management Command: Supported 00:15:36.505 Write Zeroes Command: Supported 00:15:36.505 Set Features Save Field: Supported 00:15:36.505 Reservations: Not Supported 00:15:36.505 Timestamp: Supported 00:15:36.505 Copy: Supported 00:15:36.505 Volatile Write Cache: Present 00:15:36.505 Atomic Write Unit (Normal): 1 00:15:36.505 Atomic Write Unit (PFail): 1 00:15:36.505 Atomic Compare & Write Unit: 1 00:15:36.505 Fused Compare & Write: Not Supported 00:15:36.505 Scatter-Gather List 00:15:36.505 SGL Command Set: Supported 00:15:36.505 SGL Keyed: Not Supported 00:15:36.505 SGL Bit Bucket Descriptor: Not Supported 00:15:36.505 SGL Metadata Pointer: Not Supported 00:15:36.505 Oversized SGL: Not Supported 00:15:36.505 SGL Metadata Address: Not Supported 00:15:36.505 SGL Offset: Not Supported 00:15:36.505 Transport SGL Data Block: Not Supported 00:15:36.505 Replay Protected Memory Block: Not Supported 00:15:36.505 00:15:36.505 Firmware Slot Information 00:15:36.505 ========================= 00:15:36.505 Active slot: 1 00:15:36.505 Slot 1 Firmware Revision: 1.0 00:15:36.505 00:15:36.505 00:15:36.505 Commands Supported and Effects 00:15:36.505 ============================== 00:15:36.505 Admin Commands 00:15:36.505 -------------- 00:15:36.505 Delete I/O Submission Queue (00h): Supported 00:15:36.505 Create I/O Submission Queue (01h): Supported 00:15:36.506 Get Log Page (02h): Supported 00:15:36.506 Delete I/O Completion Queue (04h): Supported 00:15:36.506 Create I/O Completion Queue (05h): Supported 00:15:36.506 Identify (06h): Supported 00:15:36.506 Abort (08h): Supported 00:15:36.506 Set Features (09h): Supported 00:15:36.506 Get Features (0Ah): Supported 00:15:36.506 Asynchronous Event Request (0Ch): Supported 00:15:36.506 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:36.506 Directive Send (19h): Supported 00:15:36.506 Directive Receive (1Ah): Supported 00:15:36.506 Virtualization Management (1Ch): Supported 00:15:36.506 Doorbell Buffer Config (7Ch): Supported 00:15:36.506 Format NVM (80h): Supported LBA-Change 00:15:36.506 I/O Commands 00:15:36.506 ------------ 00:15:36.506 Flush (00h): Supported LBA-Change 00:15:36.506 Write (01h): Supported LBA-Change 00:15:36.506 Read (02h): Supported 00:15:36.506 Compare (05h): Supported 00:15:36.506 Write Zeroes (08h): Supported LBA-Change 00:15:36.506 Dataset Management (09h): Supported LBA-Change 00:15:36.506 Unknown (0Ch): Supported 00:15:36.506 Unknown (12h): Supported 00:15:36.506 Copy (19h): Supported LBA-Change 00:15:36.506 Unknown (1Dh): Supported LBA-Change 00:15:36.506 00:15:36.506 Error Log 
00:15:36.506 ========= 00:15:36.506 00:15:36.506 Arbitration 00:15:36.506 =========== 00:15:36.506 Arbitration Burst: no limit 00:15:36.506 00:15:36.506 Power Management 00:15:36.506 ================ 00:15:36.506 Number of Power States: 1 00:15:36.506 Current Power State: Power State #0 00:15:36.506 Power State #0: 00:15:36.506 Max Power: 25.00 W 00:15:36.506 Non-Operational State: Operational 00:15:36.506 Entry Latency: 16 microseconds 00:15:36.506 Exit Latency: 4 microseconds 00:15:36.506 Relative Read Throughput: 0 00:15:36.506 Relative Read Latency: 0 00:15:36.506 Relative Write Throughput: 0 00:15:36.506 Relative Write Latency: 0 00:15:36.506 Idle Power: Not Reported 00:15:36.506 Active Power: Not Reported 00:15:36.506 Non-Operational Permissive Mode: Not Supported 00:15:36.506 00:15:36.506 Health Information 00:15:36.506 ================== 00:15:36.506 Critical Warnings: 00:15:36.506 Available Spare Space: OK 00:15:36.506 Temperature: OK 00:15:36.506 Device Reliability: OK 00:15:36.506 Read Only: No 00:15:36.506 Volatile Memory Backup: OK 00:15:36.506 Current Temperature: 323 Kelvin (50 Celsius) 00:15:36.506 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:36.506 Available Spare: 0% 00:15:36.506 Available Spare Threshold: 0% 00:15:36.506 Life Percentage Used: 0% 00:15:36.506 Data Units Read: 814 00:15:36.506 Data Units Written: 708 00:15:36.506 Host Read Commands: 35492 00:15:36.506 Host Write Commands: 34082 00:15:36.506 Controller Busy Time: 0 minutes 00:15:36.506 Power Cycles: 0 00:15:36.506 Power On Hours: 0 hours 00:15:36.506 Unsafe Shutdowns: 0 00:15:36.506 Unrecoverable Media Errors: 0 00:15:36.506 Lifetime Error Log Entries: 0 00:15:36.506 Warning Temperature Time: 0 minutes 00:15:36.506 Critical Temperature Time: 0 minutes 00:15:36.506 00:15:36.506 Number of Queues 00:15:36.506 ================ 00:15:36.506 Number of I/O Submission Queues: 64 00:15:36.506 Number of I/O Completion Queues: 64 00:15:36.506 00:15:36.506 ZNS Specific Controller Data 00:15:36.506 ============================ 00:15:36.506 Zone Append Size Limit: 0 00:15:36.506 00:15:36.506 00:15:36.506 Active Namespaces 00:15:36.506 ================= 00:15:36.506 Namespace ID:1 00:15:36.506 Error Recovery Timeout: Unlimited 00:15:36.506 Command Set Identifier: NVM (00h) 00:15:36.506 Deallocate: Supported 00:15:36.506 Deallocated/Unwritten Error: Supported 00:15:36.506 Deallocated Read Value: All 0x00 00:15:36.506 Deallocate in Write Zeroes: Not Supported 00:15:36.506 Deallocated Guard Field: 0xFFFF 00:15:36.506 Flush: Supported 00:15:36.506 Reservation: Not Supported 00:15:36.506 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.506 Size (in LBAs): 262144 (1GiB) 00:15:36.506 Capacity (in LBAs): 262144 (1GiB) 00:15:36.506 Utilization (in LBAs): 262144 (1GiB) 00:15:36.506 Thin Provisioning: Not Supported 00:15:36.506 Per-NS Atomic Units: No 00:15:36.506 Maximum Single Source Range Length: 128 00:15:36.506 Maximum Copy Length: 128 00:15:36.506 Maximum Source Range Count: 128 00:15:36.506 NGUID/EUI64 Never Reused: No 00:15:36.506 Namespace Write Protected: No 00:15:36.506 Endurance group ID: 1 00:15:36.506 Number of LBA Formats: 8 00:15:36.506 Current LBA Format: LBA Format #04 00:15:36.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.506 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:36.506 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:36.506 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:36.506 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:36.506 LBA Format 
#05: Data Size: 4096 Metadata Size: 8 00:15:36.506 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:36.506 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:36.506 00:15:36.506 Get Feature FDP: 00:15:36.506 ================ 00:15:36.506 Enabled: Yes 00:15:36.506 FDP configuration index: 0 00:15:36.506 00:15:36.506 FDP configurations log page 00:15:36.506 =========================== 00:15:36.506 Number of FDP configurations: 1 00:15:36.506 Version: 0 00:15:36.506 Size: 112 00:15:36.506 FDP Configuration Descriptor: 0 00:15:36.506 Descriptor Size: 96 00:15:36.506 Reclaim Group Identifier format: 2 00:15:36.506 FDP Volatile Write Cache: Not Present 00:15:36.506 FDP Configuration: Valid 00:15:36.506 Vendor Specific Size: 0 00:15:36.506 Number of Reclaim Groups: 2 00:15:36.506 Number of Reclaim Unit Handles: 8 00:15:36.506 Max Placement Identifiers: 128 00:15:36.506 Number of Namespaces Supported: 256 00:15:36.506 Reclaim Unit Nominal Size: 6000000 bytes 00:15:36.506 Estimated Reclaim Unit Time Limit: Not Reported 00:15:36.506 RUH Desc #000: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #001: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #002: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #003: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #004: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #005: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #006: RUH Type: Initially Isolated 00:15:36.506 RUH Desc #007: RUH Type: Initially Isolated 00:15:36.506 00:15:36.506 FDP reclaim unit handle usage log page 00:15:36.506 ====================================== 00:15:36.506 Number of Reclaim Unit Handles: 8 00:15:36.506 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:36.506 RUH Usage Desc #001: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #002: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #003: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #004: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #005: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #006: RUH Attributes: Unused 00:15:36.506 RUH Usage Desc #007: RUH Attributes: Unused 00:15:36.506 00:15:36.506 FDP statistics log page 00:15:36.506 ======================= 00:15:36.506 Host bytes with metadata written: 447717376 00:15:36.506 Media bytes with metadata written: 447782912 00:15:36.506 Media bytes erased: 0 00:15:36.506 00:15:36.506 FDP events log page 00:15:36.506 =================== 00:15:36.506 Number of FDP events: 0 00:15:36.506 00:15:36.506 NVM Specific Namespace Data 00:15:36.506 =========================== 00:15:36.506 Logical Block Storage Tag Mask: 0 00:15:36.506 Protection Information Capabilities: 00:15:36.506 16b Guard Protection Information Storage Tag Support: No 00:15:36.506 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:36.506 Storage Tag Check Read Support: No 00:15:36.506 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #06: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:36.506 ************************************ 00:15:36.506 END TEST nvme_identify 00:15:36.506 ************************************ 00:15:36.506 00:15:36.506 real 0m1.744s 00:15:36.506 user 0m0.701s 00:15:36.506 sys 0m0.816s 00:15:36.506 17:16:22 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.506 17:16:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 17:16:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:15:36.506 17:16:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:36.507 17:16:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.507 17:16:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.507 ************************************ 00:15:36.507 START TEST nvme_perf 00:15:36.507 ************************************ 00:15:36.507 17:16:22 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:15:36.507 17:16:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:15:37.880 Initializing NVMe Controllers 00:15:37.880 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:37.880 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:37.880 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:37.880 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:37.880 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:37.880 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:37.880 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:37.880 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:37.880 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:37.880 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:37.880 Initialization complete. Launching workers. 
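The summary table that follows comes from the read pass of spdk_nvme_perf launched above (-q 128 queue depth, -w read workload, -o 12288-byte I/Os, -t 1 second run; the -LL flag enables the detailed latency tracking that produces the summary and histogram sections below). The MiB/s column follows directly from the IOPS column and the fixed 12288-byte I/O size set by -o. A minimal shell sketch of that arithmetic, with the constants copied from the PCIE (0000:00:10.0) row of the table rather than computed by the test itself:

awk 'BEGIN {
  iops = 12751.49                 # IOPS reported for PCIE (0000:00:10.0) NSID 1
  io_size = 12288                 # bytes per I/O, from the -o flag
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 149.43
}'

The result, 149.43 MiB/s, matches the reported throughput for that device.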
00:15:37.880 ======================================================== 00:15:37.880 Latency(us) 00:15:37.880 Device Information : IOPS MiB/s Average min max 00:15:37.881 PCIE (0000:00:10.0) NSID 1 from core 0: 12751.49 149.43 10050.58 8313.68 50121.75 00:15:37.881 PCIE (0000:00:11.0) NSID 1 from core 0: 12751.49 149.43 10023.55 8354.38 46996.40 00:15:37.881 PCIE (0000:00:13.0) NSID 1 from core 0: 12751.49 149.43 9994.29 8401.45 44500.23 00:15:37.881 PCIE (0000:00:12.0) NSID 1 from core 0: 12751.49 149.43 9964.86 8421.78 41417.88 00:15:37.881 PCIE (0000:00:12.0) NSID 2 from core 0: 12815.25 150.18 9886.19 8408.48 33482.04 00:15:37.881 PCIE (0000:00:12.0) NSID 3 from core 0: 12815.25 150.18 9857.70 8387.47 30374.98 00:15:37.881 ======================================================== 00:15:37.881 Total : 76636.47 898.08 9962.71 8313.68 50121.75 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8579.258us 00:15:37.881 10.00000% : 8936.727us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9651.665us 00:15:37.881 75.00000% : 10068.713us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11260.276us 00:15:37.881 98.00000% : 11915.636us 00:15:37.881 99.00000% : 13166.778us 00:15:37.881 99.50000% : 43372.916us 00:15:37.881 99.90000% : 49807.360us 00:15:37.881 99.99000% : 50283.985us 00:15:37.881 99.99900% : 50283.985us 00:15:37.881 99.99990% : 50283.985us 00:15:37.881 99.99999% : 50283.985us 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8638.836us 00:15:37.881 10.00000% : 8996.305us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9592.087us 00:15:37.881 75.00000% : 10068.713us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11260.276us 00:15:37.881 98.00000% : 11856.058us 00:15:37.881 99.00000% : 13166.778us 00:15:37.881 99.50000% : 40036.538us 00:15:37.881 99.90000% : 46709.295us 00:15:37.881 99.99000% : 47185.920us 00:15:37.881 99.99900% : 47185.920us 00:15:37.881 99.99990% : 47185.920us 00:15:37.881 99.99999% : 47185.920us 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8638.836us 00:15:37.881 10.00000% : 8996.305us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9592.087us 00:15:37.881 75.00000% : 10009.135us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11200.698us 00:15:37.881 98.00000% : 11856.058us 00:15:37.881 99.00000% : 13107.200us 00:15:37.881 99.50000% : 37653.411us 00:15:37.881 99.90000% : 44087.855us 00:15:37.881 99.99000% : 44564.480us 00:15:37.881 99.99900% : 44564.480us 00:15:37.881 99.99990% : 44564.480us 00:15:37.881 99.99999% : 44564.480us 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8698.415us 00:15:37.881 10.00000% : 8996.305us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9592.087us 00:15:37.881 75.00000% : 10009.135us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11200.698us 00:15:37.881 98.00000% : 11915.636us 00:15:37.881 
99.00000% : 13166.778us 00:15:37.881 99.50000% : 34555.345us 00:15:37.881 99.90000% : 40989.789us 00:15:37.881 99.99000% : 41466.415us 00:15:37.881 99.99900% : 41466.415us 00:15:37.881 99.99990% : 41466.415us 00:15:37.881 99.99999% : 41466.415us 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8698.415us 00:15:37.881 10.00000% : 8996.305us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9592.087us 00:15:37.881 75.00000% : 10068.713us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11200.698us 00:15:37.881 98.00000% : 11856.058us 00:15:37.881 99.00000% : 13047.622us 00:15:37.881 99.50000% : 26333.556us 00:15:37.881 99.90000% : 33125.469us 00:15:37.881 99.99000% : 33602.095us 00:15:37.881 99.99900% : 33602.095us 00:15:37.881 99.99990% : 33602.095us 00:15:37.881 99.99999% : 33602.095us 00:15:37.881 00:15:37.881 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:37.881 ================================================================================= 00:15:37.881 1.00000% : 8698.415us 00:15:37.881 10.00000% : 8996.305us 00:15:37.881 25.00000% : 9234.618us 00:15:37.881 50.00000% : 9592.087us 00:15:37.881 75.00000% : 10068.713us 00:15:37.881 90.00000% : 10783.651us 00:15:37.881 95.00000% : 11200.698us 00:15:37.881 98.00000% : 11856.058us 00:15:37.881 99.00000% : 12868.887us 00:15:37.881 99.50000% : 23473.804us 00:15:37.881 99.90000% : 29908.247us 00:15:37.881 99.99000% : 30384.873us 00:15:37.881 99.99900% : 30384.873us 00:15:37.881 99.99990% : 30384.873us 00:15:37.881 99.99999% : 30384.873us 00:15:37.881 00:15:37.881 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:37.881 ============================================================================== 00:15:37.881 Range in us Cumulative IO count 00:15:37.881 8281.367 - 8340.945: 0.0312% ( 4) 00:15:37.881 8340.945 - 8400.524: 0.1797% ( 19) 00:15:37.881 8400.524 - 8460.102: 0.4375% ( 33) 00:15:37.881 8460.102 - 8519.680: 0.7812% ( 44) 00:15:37.881 8519.680 - 8579.258: 1.3438% ( 72) 00:15:37.881 8579.258 - 8638.836: 2.1016% ( 97) 00:15:37.881 8638.836 - 8698.415: 3.2500% ( 147) 00:15:37.881 8698.415 - 8757.993: 4.7422% ( 191) 00:15:37.881 8757.993 - 8817.571: 6.7812% ( 261) 00:15:37.881 8817.571 - 8877.149: 9.1875% ( 308) 00:15:37.881 8877.149 - 8936.727: 11.9844% ( 358) 00:15:37.881 8936.727 - 8996.305: 14.7812% ( 358) 00:15:37.881 8996.305 - 9055.884: 17.9766% ( 409) 00:15:37.881 9055.884 - 9115.462: 21.1250% ( 403) 00:15:37.881 9115.462 - 9175.040: 24.6172% ( 447) 00:15:37.881 9175.040 - 9234.618: 28.2031% ( 459) 00:15:37.881 9234.618 - 9294.196: 31.7812% ( 458) 00:15:37.881 9294.196 - 9353.775: 35.3984% ( 463) 00:15:37.881 9353.775 - 9413.353: 39.0703% ( 470) 00:15:37.881 9413.353 - 9472.931: 42.6484% ( 458) 00:15:37.881 9472.931 - 9532.509: 46.1875% ( 453) 00:15:37.881 9532.509 - 9592.087: 49.7891% ( 461) 00:15:37.881 9592.087 - 9651.665: 53.3125% ( 451) 00:15:37.881 9651.665 - 9711.244: 56.8516% ( 453) 00:15:37.881 9711.244 - 9770.822: 60.3359% ( 446) 00:15:37.881 9770.822 - 9830.400: 63.7031% ( 431) 00:15:37.881 9830.400 - 9889.978: 66.8438% ( 402) 00:15:37.881 9889.978 - 9949.556: 69.9375% ( 396) 00:15:37.881 9949.556 - 10009.135: 72.7344% ( 358) 00:15:37.881 10009.135 - 10068.713: 75.2109% ( 317) 00:15:37.881 10068.713 - 10128.291: 77.5234% ( 296) 00:15:37.881 10128.291 - 10187.869: 79.5625% ( 261) 00:15:37.881 
10187.869 - 10247.447: 81.3281% ( 226) 00:15:37.881 10247.447 - 10307.025: 82.9688% ( 210) 00:15:37.881 10307.025 - 10366.604: 84.2656% ( 166) 00:15:37.881 10366.604 - 10426.182: 85.3672% ( 141) 00:15:37.881 10426.182 - 10485.760: 86.3672% ( 128) 00:15:37.881 10485.760 - 10545.338: 87.3125% ( 121) 00:15:37.881 10545.338 - 10604.916: 88.2266% ( 117) 00:15:37.881 10604.916 - 10664.495: 89.0000% ( 99) 00:15:37.881 10664.495 - 10724.073: 89.6797% ( 87) 00:15:37.881 10724.073 - 10783.651: 90.4062% ( 93) 00:15:37.881 10783.651 - 10843.229: 91.0781% ( 86) 00:15:37.881 10843.229 - 10902.807: 91.6797% ( 77) 00:15:37.881 10902.807 - 10962.385: 92.2969% ( 79) 00:15:37.881 10962.385 - 11021.964: 92.9531% ( 84) 00:15:37.881 11021.964 - 11081.542: 93.5938% ( 82) 00:15:37.881 11081.542 - 11141.120: 94.1094% ( 66) 00:15:37.881 11141.120 - 11200.698: 94.5938% ( 62) 00:15:37.881 11200.698 - 11260.276: 95.1484% ( 71) 00:15:37.881 11260.276 - 11319.855: 95.6328% ( 62) 00:15:37.881 11319.855 - 11379.433: 96.0469% ( 53) 00:15:37.881 11379.433 - 11439.011: 96.3828% ( 43) 00:15:37.881 11439.011 - 11498.589: 96.6875% ( 39) 00:15:37.881 11498.589 - 11558.167: 96.9922% ( 39) 00:15:37.881 11558.167 - 11617.745: 97.2500% ( 33) 00:15:37.881 11617.745 - 11677.324: 97.4609% ( 27) 00:15:37.881 11677.324 - 11736.902: 97.6875% ( 29) 00:15:37.881 11736.902 - 11796.480: 97.8516% ( 21) 00:15:37.881 11796.480 - 11856.058: 97.9844% ( 17) 00:15:37.881 11856.058 - 11915.636: 98.1406% ( 20) 00:15:37.881 11915.636 - 11975.215: 98.2344% ( 12) 00:15:37.881 11975.215 - 12034.793: 98.3125% ( 10) 00:15:37.881 12034.793 - 12094.371: 98.3984% ( 11) 00:15:37.881 12094.371 - 12153.949: 98.4922% ( 12) 00:15:37.881 12153.949 - 12213.527: 98.5391% ( 6) 00:15:37.881 12213.527 - 12273.105: 98.6094% ( 9) 00:15:37.881 12273.105 - 12332.684: 98.6562% ( 6) 00:15:37.881 12332.684 - 12392.262: 98.7031% ( 6) 00:15:37.881 12392.262 - 12451.840: 98.7344% ( 4) 00:15:37.881 12451.840 - 12511.418: 98.7578% ( 3) 00:15:37.881 12511.418 - 12570.996: 98.7969% ( 5) 00:15:37.881 12570.996 - 12630.575: 98.8047% ( 1) 00:15:37.881 12630.575 - 12690.153: 98.8359% ( 4) 00:15:37.881 12690.153 - 12749.731: 98.8594% ( 3) 00:15:37.881 12749.731 - 12809.309: 98.8750% ( 2) 00:15:37.881 12809.309 - 12868.887: 98.9062% ( 4) 00:15:37.881 12868.887 - 12928.465: 98.9141% ( 1) 00:15:37.881 12928.465 - 12988.044: 98.9297% ( 2) 00:15:37.881 12988.044 - 13047.622: 98.9609% ( 4) 00:15:37.881 13047.622 - 13107.200: 98.9922% ( 4) 00:15:37.881 13107.200 - 13166.778: 99.0000% ( 1) 00:15:37.881 40513.164 - 40751.476: 99.0391% ( 5) 00:15:37.881 40751.476 - 40989.789: 99.0859% ( 6) 00:15:37.882 40989.789 - 41228.102: 99.1250% ( 5) 00:15:37.882 41228.102 - 41466.415: 99.1641% ( 5) 00:15:37.882 41466.415 - 41704.727: 99.2109% ( 6) 00:15:37.882 41704.727 - 41943.040: 99.2500% ( 5) 00:15:37.882 41943.040 - 42181.353: 99.3047% ( 7) 00:15:37.882 42181.353 - 42419.665: 99.3516% ( 6) 00:15:37.882 42419.665 - 42657.978: 99.3984% ( 6) 00:15:37.882 42657.978 - 42896.291: 99.4375% ( 5) 00:15:37.882 42896.291 - 43134.604: 99.4922% ( 7) 00:15:37.882 43134.604 - 43372.916: 99.5000% ( 1) 00:15:37.882 47424.233 - 47662.545: 99.5234% ( 3) 00:15:37.882 47662.545 - 47900.858: 99.5703% ( 6) 00:15:37.882 47900.858 - 48139.171: 99.6172% ( 6) 00:15:37.882 48139.171 - 48377.484: 99.6641% ( 6) 00:15:37.882 48377.484 - 48615.796: 99.7031% ( 5) 00:15:37.882 48615.796 - 48854.109: 99.7500% ( 6) 00:15:37.882 48854.109 - 49092.422: 99.8047% ( 7) 00:15:37.882 49092.422 - 49330.735: 99.8516% ( 6) 00:15:37.882 49330.735 - 
49569.047: 99.8906% ( 5) 00:15:37.882 49569.047 - 49807.360: 99.9453% ( 7) 00:15:37.882 49807.360 - 50045.673: 99.9844% ( 5) 00:15:37.882 50045.673 - 50283.985: 100.0000% ( 2) 00:15:37.882 00:15:37.882 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:37.882 ============================================================================== 00:15:37.882 Range in us Cumulative IO count 00:15:37.882 8340.945 - 8400.524: 0.0469% ( 6) 00:15:37.882 8400.524 - 8460.102: 0.1406% ( 12) 00:15:37.882 8460.102 - 8519.680: 0.3516% ( 27) 00:15:37.882 8519.680 - 8579.258: 0.6016% ( 32) 00:15:37.882 8579.258 - 8638.836: 1.0859% ( 62) 00:15:37.882 8638.836 - 8698.415: 1.8516% ( 98) 00:15:37.882 8698.415 - 8757.993: 2.8984% ( 134) 00:15:37.882 8757.993 - 8817.571: 4.2188% ( 169) 00:15:37.882 8817.571 - 8877.149: 5.9922% ( 227) 00:15:37.882 8877.149 - 8936.727: 8.3359% ( 300) 00:15:37.882 8936.727 - 8996.305: 11.2656% ( 375) 00:15:37.882 8996.305 - 9055.884: 14.4766% ( 411) 00:15:37.882 9055.884 - 9115.462: 17.9688% ( 447) 00:15:37.882 9115.462 - 9175.040: 21.9844% ( 514) 00:15:37.882 9175.040 - 9234.618: 26.0547% ( 521) 00:15:37.882 9234.618 - 9294.196: 30.2500% ( 537) 00:15:37.882 9294.196 - 9353.775: 34.4531% ( 538) 00:15:37.882 9353.775 - 9413.353: 38.8125% ( 558) 00:15:37.882 9413.353 - 9472.931: 43.0000% ( 536) 00:15:37.882 9472.931 - 9532.509: 47.2188% ( 540) 00:15:37.882 9532.509 - 9592.087: 51.1484% ( 503) 00:15:37.882 9592.087 - 9651.665: 55.0703% ( 502) 00:15:37.882 9651.665 - 9711.244: 59.0078% ( 504) 00:15:37.882 9711.244 - 9770.822: 62.7500% ( 479) 00:15:37.882 9770.822 - 9830.400: 66.1016% ( 429) 00:15:37.882 9830.400 - 9889.978: 69.3203% ( 412) 00:15:37.882 9889.978 - 9949.556: 72.2188% ( 371) 00:15:37.882 9949.556 - 10009.135: 74.6406% ( 310) 00:15:37.882 10009.135 - 10068.713: 76.9141% ( 291) 00:15:37.882 10068.713 - 10128.291: 78.7422% ( 234) 00:15:37.882 10128.291 - 10187.869: 80.5391% ( 230) 00:15:37.882 10187.869 - 10247.447: 81.9141% ( 176) 00:15:37.882 10247.447 - 10307.025: 83.2578% ( 172) 00:15:37.882 10307.025 - 10366.604: 84.4531% ( 153) 00:15:37.882 10366.604 - 10426.182: 85.4219% ( 124) 00:15:37.882 10426.182 - 10485.760: 86.3906% ( 124) 00:15:37.882 10485.760 - 10545.338: 87.2109% ( 105) 00:15:37.882 10545.338 - 10604.916: 88.0469% ( 107) 00:15:37.882 10604.916 - 10664.495: 88.8438% ( 102) 00:15:37.882 10664.495 - 10724.073: 89.6484% ( 103) 00:15:37.882 10724.073 - 10783.651: 90.4531% ( 103) 00:15:37.882 10783.651 - 10843.229: 91.1875% ( 94) 00:15:37.882 10843.229 - 10902.807: 91.9297% ( 95) 00:15:37.882 10902.807 - 10962.385: 92.5938% ( 85) 00:15:37.882 10962.385 - 11021.964: 93.2344% ( 82) 00:15:37.882 11021.964 - 11081.542: 93.8906% ( 84) 00:15:37.882 11081.542 - 11141.120: 94.4375% ( 70) 00:15:37.882 11141.120 - 11200.698: 94.9844% ( 70) 00:15:37.882 11200.698 - 11260.276: 95.5000% ( 66) 00:15:37.882 11260.276 - 11319.855: 95.9609% ( 59) 00:15:37.882 11319.855 - 11379.433: 96.3516% ( 50) 00:15:37.882 11379.433 - 11439.011: 96.7031% ( 45) 00:15:37.882 11439.011 - 11498.589: 96.9922% ( 37) 00:15:37.882 11498.589 - 11558.167: 97.2891% ( 38) 00:15:37.882 11558.167 - 11617.745: 97.5078% ( 28) 00:15:37.882 11617.745 - 11677.324: 97.7266% ( 28) 00:15:37.882 11677.324 - 11736.902: 97.8594% ( 17) 00:15:37.882 11736.902 - 11796.480: 97.9531% ( 12) 00:15:37.882 11796.480 - 11856.058: 98.0547% ( 13) 00:15:37.882 11856.058 - 11915.636: 98.1172% ( 8) 00:15:37.882 11915.636 - 11975.215: 98.1875% ( 9) 00:15:37.882 11975.215 - 12034.793: 98.2656% ( 10) 00:15:37.882 12034.793 - 
12094.371: 98.3438% ( 10) 00:15:37.882 12094.371 - 12153.949: 98.4609% ( 15) 00:15:37.882 12153.949 - 12213.527: 98.5469% ( 11) 00:15:37.882 12213.527 - 12273.105: 98.5859% ( 5) 00:15:37.882 12273.105 - 12332.684: 98.6406% ( 7) 00:15:37.882 12332.684 - 12392.262: 98.6719% ( 4) 00:15:37.882 12392.262 - 12451.840: 98.6953% ( 3) 00:15:37.882 12451.840 - 12511.418: 98.7266% ( 4) 00:15:37.882 12511.418 - 12570.996: 98.7500% ( 3) 00:15:37.882 12570.996 - 12630.575: 98.7734% ( 3) 00:15:37.882 12630.575 - 12690.153: 98.7969% ( 3) 00:15:37.882 12690.153 - 12749.731: 98.8281% ( 4) 00:15:37.882 12749.731 - 12809.309: 98.8516% ( 3) 00:15:37.882 12809.309 - 12868.887: 98.8828% ( 4) 00:15:37.882 12868.887 - 12928.465: 98.9141% ( 4) 00:15:37.882 12928.465 - 12988.044: 98.9375% ( 3) 00:15:37.882 12988.044 - 13047.622: 98.9688% ( 4) 00:15:37.882 13047.622 - 13107.200: 98.9922% ( 3) 00:15:37.882 13107.200 - 13166.778: 99.0000% ( 1) 00:15:37.882 37415.098 - 37653.411: 99.0234% ( 3) 00:15:37.882 37653.411 - 37891.724: 99.0625% ( 5) 00:15:37.882 37891.724 - 38130.036: 99.1016% ( 5) 00:15:37.882 38130.036 - 38368.349: 99.1562% ( 7) 00:15:37.882 38368.349 - 38606.662: 99.2031% ( 6) 00:15:37.882 38606.662 - 38844.975: 99.2578% ( 7) 00:15:37.882 38844.975 - 39083.287: 99.3047% ( 6) 00:15:37.882 39083.287 - 39321.600: 99.3516% ( 6) 00:15:37.882 39321.600 - 39559.913: 99.3984% ( 6) 00:15:37.882 39559.913 - 39798.225: 99.4453% ( 6) 00:15:37.882 39798.225 - 40036.538: 99.5000% ( 7) 00:15:37.882 44326.167 - 44564.480: 99.5078% ( 1) 00:15:37.882 44564.480 - 44802.793: 99.5547% ( 6) 00:15:37.882 44802.793 - 45041.105: 99.6094% ( 7) 00:15:37.882 45041.105 - 45279.418: 99.6406% ( 4) 00:15:37.882 45279.418 - 45517.731: 99.6875% ( 6) 00:15:37.882 45517.731 - 45756.044: 99.7422% ( 7) 00:15:37.882 45756.044 - 45994.356: 99.7891% ( 6) 00:15:37.882 45994.356 - 46232.669: 99.8438% ( 7) 00:15:37.882 46232.669 - 46470.982: 99.8906% ( 6) 00:15:37.882 46470.982 - 46709.295: 99.9297% ( 5) 00:15:37.882 46709.295 - 46947.607: 99.9844% ( 7) 00:15:37.882 46947.607 - 47185.920: 100.0000% ( 2) 00:15:37.882 00:15:37.882 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:37.882 ============================================================================== 00:15:37.882 Range in us Cumulative IO count 00:15:37.882 8400.524 - 8460.102: 0.0625% ( 8) 00:15:37.882 8460.102 - 8519.680: 0.2656% ( 26) 00:15:37.882 8519.680 - 8579.258: 0.5391% ( 35) 00:15:37.882 8579.258 - 8638.836: 1.0078% ( 60) 00:15:37.882 8638.836 - 8698.415: 1.7109% ( 90) 00:15:37.882 8698.415 - 8757.993: 2.8359% ( 144) 00:15:37.882 8757.993 - 8817.571: 4.1484% ( 168) 00:15:37.882 8817.571 - 8877.149: 6.1094% ( 251) 00:15:37.882 8877.149 - 8936.727: 8.5703% ( 315) 00:15:37.882 8936.727 - 8996.305: 11.4453% ( 368) 00:15:37.882 8996.305 - 9055.884: 14.8047% ( 430) 00:15:37.882 9055.884 - 9115.462: 18.2734% ( 444) 00:15:37.882 9115.462 - 9175.040: 22.1641% ( 498) 00:15:37.882 9175.040 - 9234.618: 26.0781% ( 501) 00:15:37.882 9234.618 - 9294.196: 30.0859% ( 513) 00:15:37.882 9294.196 - 9353.775: 34.4766% ( 562) 00:15:37.882 9353.775 - 9413.353: 38.7031% ( 541) 00:15:37.882 9413.353 - 9472.931: 43.0078% ( 551) 00:15:37.882 9472.931 - 9532.509: 47.2812% ( 547) 00:15:37.882 9532.509 - 9592.087: 51.3438% ( 520) 00:15:37.882 9592.087 - 9651.665: 55.4844% ( 530) 00:15:37.882 9651.665 - 9711.244: 59.3438% ( 494) 00:15:37.882 9711.244 - 9770.822: 63.0938% ( 480) 00:15:37.882 9770.822 - 9830.400: 66.5781% ( 446) 00:15:37.882 9830.400 - 9889.978: 69.8828% ( 423) 00:15:37.882 
9889.978 - 9949.556: 72.6250% ( 351) 00:15:37.882 9949.556 - 10009.135: 75.0625% ( 312) 00:15:37.882 10009.135 - 10068.713: 77.1484% ( 267) 00:15:37.882 10068.713 - 10128.291: 79.0938% ( 249) 00:15:37.882 10128.291 - 10187.869: 80.7812% ( 216) 00:15:37.882 10187.869 - 10247.447: 82.1797% ( 179) 00:15:37.882 10247.447 - 10307.025: 83.4922% ( 168) 00:15:37.882 10307.025 - 10366.604: 84.6641% ( 150) 00:15:37.882 10366.604 - 10426.182: 85.6953% ( 132) 00:15:37.882 10426.182 - 10485.760: 86.6641% ( 124) 00:15:37.882 10485.760 - 10545.338: 87.5625% ( 115) 00:15:37.882 10545.338 - 10604.916: 88.3828% ( 105) 00:15:37.882 10604.916 - 10664.495: 89.1016% ( 92) 00:15:37.882 10664.495 - 10724.073: 89.7969% ( 89) 00:15:37.882 10724.073 - 10783.651: 90.5391% ( 95) 00:15:37.882 10783.651 - 10843.229: 91.3125% ( 99) 00:15:37.882 10843.229 - 10902.807: 92.0000% ( 88) 00:15:37.882 10902.807 - 10962.385: 92.7188% ( 92) 00:15:37.882 10962.385 - 11021.964: 93.3594% ( 82) 00:15:37.882 11021.964 - 11081.542: 93.9844% ( 80) 00:15:37.883 11081.542 - 11141.120: 94.5469% ( 72) 00:15:37.883 11141.120 - 11200.698: 95.0234% ( 61) 00:15:37.883 11200.698 - 11260.276: 95.4766% ( 58) 00:15:37.883 11260.276 - 11319.855: 95.9141% ( 56) 00:15:37.883 11319.855 - 11379.433: 96.3047% ( 50) 00:15:37.883 11379.433 - 11439.011: 96.6562% ( 45) 00:15:37.883 11439.011 - 11498.589: 96.9609% ( 39) 00:15:37.883 11498.589 - 11558.167: 97.1953% ( 30) 00:15:37.883 11558.167 - 11617.745: 97.4297% ( 30) 00:15:37.883 11617.745 - 11677.324: 97.6094% ( 23) 00:15:37.883 11677.324 - 11736.902: 97.7734% ( 21) 00:15:37.883 11736.902 - 11796.480: 97.9062% ( 17) 00:15:37.883 11796.480 - 11856.058: 98.0312% ( 16) 00:15:37.883 11856.058 - 11915.636: 98.1328% ( 13) 00:15:37.883 11915.636 - 11975.215: 98.2266% ( 12) 00:15:37.883 11975.215 - 12034.793: 98.3359% ( 14) 00:15:37.883 12034.793 - 12094.371: 98.4062% ( 9) 00:15:37.883 12094.371 - 12153.949: 98.4844% ( 10) 00:15:37.883 12153.949 - 12213.527: 98.5469% ( 8) 00:15:37.883 12213.527 - 12273.105: 98.6016% ( 7) 00:15:37.883 12273.105 - 12332.684: 98.6562% ( 7) 00:15:37.883 12332.684 - 12392.262: 98.6953% ( 5) 00:15:37.883 12392.262 - 12451.840: 98.7188% ( 3) 00:15:37.883 12451.840 - 12511.418: 98.7422% ( 3) 00:15:37.883 12511.418 - 12570.996: 98.7656% ( 3) 00:15:37.883 12570.996 - 12630.575: 98.7969% ( 4) 00:15:37.883 12630.575 - 12690.153: 98.8203% ( 3) 00:15:37.883 12690.153 - 12749.731: 98.8516% ( 4) 00:15:37.883 12749.731 - 12809.309: 98.8750% ( 3) 00:15:37.883 12809.309 - 12868.887: 98.9062% ( 4) 00:15:37.883 12868.887 - 12928.465: 98.9297% ( 3) 00:15:37.883 12928.465 - 12988.044: 98.9609% ( 4) 00:15:37.883 12988.044 - 13047.622: 98.9922% ( 4) 00:15:37.883 13047.622 - 13107.200: 99.0000% ( 1) 00:15:37.883 35031.971 - 35270.284: 99.0391% ( 5) 00:15:37.883 35270.284 - 35508.596: 99.0781% ( 5) 00:15:37.883 35508.596 - 35746.909: 99.1250% ( 6) 00:15:37.883 35746.909 - 35985.222: 99.1797% ( 7) 00:15:37.883 35985.222 - 36223.535: 99.2266% ( 6) 00:15:37.883 36223.535 - 36461.847: 99.2734% ( 6) 00:15:37.883 36461.847 - 36700.160: 99.3203% ( 6) 00:15:37.883 36700.160 - 36938.473: 99.3672% ( 6) 00:15:37.883 36938.473 - 37176.785: 99.4141% ( 6) 00:15:37.883 37176.785 - 37415.098: 99.4609% ( 6) 00:15:37.883 37415.098 - 37653.411: 99.5000% ( 5) 00:15:37.883 41943.040 - 42181.353: 99.5312% ( 4) 00:15:37.883 42181.353 - 42419.665: 99.5781% ( 6) 00:15:37.883 42419.665 - 42657.978: 99.6250% ( 6) 00:15:37.883 42657.978 - 42896.291: 99.6641% ( 5) 00:15:37.883 42896.291 - 43134.604: 99.7188% ( 7) 00:15:37.883 
43134.604 - 43372.916: 99.7656% ( 6) 00:15:37.883 43372.916 - 43611.229: 99.8203% ( 7) 00:15:37.883 43611.229 - 43849.542: 99.8672% ( 6) 00:15:37.883 43849.542 - 44087.855: 99.9141% ( 6) 00:15:37.883 44087.855 - 44326.167: 99.9609% ( 6) 00:15:37.883 44326.167 - 44564.480: 100.0000% ( 5) 00:15:37.883 00:15:37.883 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:37.883 ============================================================================== 00:15:37.883 Range in us Cumulative IO count 00:15:37.883 8400.524 - 8460.102: 0.0703% ( 9) 00:15:37.883 8460.102 - 8519.680: 0.2266% ( 20) 00:15:37.883 8519.680 - 8579.258: 0.4609% ( 30) 00:15:37.883 8579.258 - 8638.836: 0.8750% ( 53) 00:15:37.883 8638.836 - 8698.415: 1.5312% ( 84) 00:15:37.883 8698.415 - 8757.993: 2.5625% ( 132) 00:15:37.883 8757.993 - 8817.571: 3.9688% ( 180) 00:15:37.883 8817.571 - 8877.149: 5.9844% ( 258) 00:15:37.883 8877.149 - 8936.727: 8.3516% ( 303) 00:15:37.883 8936.727 - 8996.305: 11.0156% ( 341) 00:15:37.883 8996.305 - 9055.884: 14.3203% ( 423) 00:15:37.883 9055.884 - 9115.462: 18.0078% ( 472) 00:15:37.883 9115.462 - 9175.040: 21.9375% ( 503) 00:15:37.883 9175.040 - 9234.618: 25.9375% ( 512) 00:15:37.883 9234.618 - 9294.196: 30.1797% ( 543) 00:15:37.883 9294.196 - 9353.775: 34.3828% ( 538) 00:15:37.883 9353.775 - 9413.353: 38.9141% ( 580) 00:15:37.883 9413.353 - 9472.931: 43.1641% ( 544) 00:15:37.883 9472.931 - 9532.509: 47.4766% ( 552) 00:15:37.883 9532.509 - 9592.087: 51.5469% ( 521) 00:15:37.883 9592.087 - 9651.665: 55.6406% ( 524) 00:15:37.883 9651.665 - 9711.244: 59.4922% ( 493) 00:15:37.883 9711.244 - 9770.822: 63.1641% ( 470) 00:15:37.883 9770.822 - 9830.400: 66.6094% ( 441) 00:15:37.883 9830.400 - 9889.978: 69.8281% ( 412) 00:15:37.883 9889.978 - 9949.556: 72.7891% ( 379) 00:15:37.883 9949.556 - 10009.135: 75.1719% ( 305) 00:15:37.883 10009.135 - 10068.713: 77.3594% ( 280) 00:15:37.883 10068.713 - 10128.291: 79.2891% ( 247) 00:15:37.883 10128.291 - 10187.869: 80.9766% ( 216) 00:15:37.883 10187.869 - 10247.447: 82.4922% ( 194) 00:15:37.883 10247.447 - 10307.025: 83.7500% ( 161) 00:15:37.883 10307.025 - 10366.604: 84.8359% ( 139) 00:15:37.883 10366.604 - 10426.182: 85.7891% ( 122) 00:15:37.883 10426.182 - 10485.760: 86.7031% ( 117) 00:15:37.883 10485.760 - 10545.338: 87.4844% ( 100) 00:15:37.883 10545.338 - 10604.916: 88.2969% ( 104) 00:15:37.883 10604.916 - 10664.495: 89.0469% ( 96) 00:15:37.883 10664.495 - 10724.073: 89.8047% ( 97) 00:15:37.883 10724.073 - 10783.651: 90.5625% ( 97) 00:15:37.883 10783.651 - 10843.229: 91.3438% ( 100) 00:15:37.883 10843.229 - 10902.807: 92.0625% ( 92) 00:15:37.883 10902.807 - 10962.385: 92.7656% ( 90) 00:15:37.883 10962.385 - 11021.964: 93.4219% ( 84) 00:15:37.883 11021.964 - 11081.542: 94.0156% ( 76) 00:15:37.883 11081.542 - 11141.120: 94.5703% ( 71) 00:15:37.883 11141.120 - 11200.698: 95.0703% ( 64) 00:15:37.883 11200.698 - 11260.276: 95.5312% ( 59) 00:15:37.883 11260.276 - 11319.855: 95.9141% ( 49) 00:15:37.883 11319.855 - 11379.433: 96.2734% ( 46) 00:15:37.883 11379.433 - 11439.011: 96.6016% ( 42) 00:15:37.883 11439.011 - 11498.589: 96.9141% ( 40) 00:15:37.883 11498.589 - 11558.167: 97.1797% ( 34) 00:15:37.883 11558.167 - 11617.745: 97.3984% ( 28) 00:15:37.883 11617.745 - 11677.324: 97.6094% ( 27) 00:15:37.883 11677.324 - 11736.902: 97.7734% ( 21) 00:15:37.883 11736.902 - 11796.480: 97.8750% ( 13) 00:15:37.883 11796.480 - 11856.058: 97.9922% ( 15) 00:15:37.883 11856.058 - 11915.636: 98.0547% ( 8) 00:15:37.883 11915.636 - 11975.215: 98.1250% ( 9) 00:15:37.883 
11975.215 - 12034.793: 98.1953% ( 9) 00:15:37.883 12034.793 - 12094.371: 98.2656% ( 9) 00:15:37.883 12094.371 - 12153.949: 98.3516% ( 11) 00:15:37.883 12153.949 - 12213.527: 98.4297% ( 10) 00:15:37.883 12213.527 - 12273.105: 98.4766% ( 6) 00:15:37.883 12273.105 - 12332.684: 98.5391% ( 8) 00:15:37.883 12332.684 - 12392.262: 98.5938% ( 7) 00:15:37.883 12392.262 - 12451.840: 98.6406% ( 6) 00:15:37.883 12451.840 - 12511.418: 98.6875% ( 6) 00:15:37.883 12511.418 - 12570.996: 98.7344% ( 6) 00:15:37.883 12570.996 - 12630.575: 98.7734% ( 5) 00:15:37.883 12630.575 - 12690.153: 98.8047% ( 4) 00:15:37.883 12690.153 - 12749.731: 98.8281% ( 3) 00:15:37.883 12749.731 - 12809.309: 98.8594% ( 4) 00:15:37.883 12809.309 - 12868.887: 98.8828% ( 3) 00:15:37.883 12868.887 - 12928.465: 98.9062% ( 3) 00:15:37.883 12928.465 - 12988.044: 98.9375% ( 4) 00:15:37.883 12988.044 - 13047.622: 98.9609% ( 3) 00:15:37.883 13047.622 - 13107.200: 98.9922% ( 4) 00:15:37.883 13107.200 - 13166.778: 99.0000% ( 1) 00:15:37.883 31933.905 - 32172.218: 99.0078% ( 1) 00:15:37.883 32172.218 - 32410.531: 99.0547% ( 6) 00:15:37.883 32410.531 - 32648.844: 99.1016% ( 6) 00:15:37.883 32648.844 - 32887.156: 99.1562% ( 7) 00:15:37.883 32887.156 - 33125.469: 99.2031% ( 6) 00:15:37.883 33125.469 - 33363.782: 99.2500% ( 6) 00:15:37.883 33363.782 - 33602.095: 99.3047% ( 7) 00:15:37.883 33602.095 - 33840.407: 99.3516% ( 6) 00:15:37.883 33840.407 - 34078.720: 99.3984% ( 6) 00:15:37.883 34078.720 - 34317.033: 99.4453% ( 6) 00:15:37.883 34317.033 - 34555.345: 99.5000% ( 7) 00:15:37.883 38844.975 - 39083.287: 99.5156% ( 2) 00:15:37.883 39083.287 - 39321.600: 99.5547% ( 5) 00:15:37.883 39321.600 - 39559.913: 99.6094% ( 7) 00:15:37.883 39559.913 - 39798.225: 99.6562% ( 6) 00:15:37.883 39798.225 - 40036.538: 99.7109% ( 7) 00:15:37.883 40036.538 - 40274.851: 99.7578% ( 6) 00:15:37.883 40274.851 - 40513.164: 99.8047% ( 6) 00:15:37.883 40513.164 - 40751.476: 99.8594% ( 7) 00:15:37.883 40751.476 - 40989.789: 99.9062% ( 6) 00:15:37.883 40989.789 - 41228.102: 99.9531% ( 6) 00:15:37.883 41228.102 - 41466.415: 100.0000% ( 6) 00:15:37.883 00:15:37.883 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:37.883 ============================================================================== 00:15:37.883 Range in us Cumulative IO count 00:15:37.883 8400.524 - 8460.102: 0.0466% ( 6) 00:15:37.883 8460.102 - 8519.680: 0.1788% ( 17) 00:15:37.883 8519.680 - 8579.258: 0.3887% ( 27) 00:15:37.883 8579.258 - 8638.836: 0.7618% ( 48) 00:15:37.883 8638.836 - 8698.415: 1.3837% ( 80) 00:15:37.883 8698.415 - 8757.993: 2.2777% ( 115) 00:15:37.883 8757.993 - 8817.571: 3.8324% ( 200) 00:15:37.883 8817.571 - 8877.149: 5.8225% ( 256) 00:15:37.883 8877.149 - 8936.727: 8.1001% ( 293) 00:15:37.883 8936.727 - 8996.305: 10.8598% ( 355) 00:15:37.883 8996.305 - 9055.884: 14.2879% ( 441) 00:15:37.883 9055.884 - 9115.462: 17.8405% ( 457) 00:15:37.883 9115.462 - 9175.040: 21.6340% ( 488) 00:15:37.883 9175.040 - 9234.618: 25.7929% ( 535) 00:15:37.884 9234.618 - 9294.196: 29.9907% ( 540) 00:15:37.884 9294.196 - 9353.775: 34.2506% ( 548) 00:15:37.884 9353.775 - 9413.353: 38.5028% ( 547) 00:15:37.884 9413.353 - 9472.931: 42.7861% ( 551) 00:15:37.884 9472.931 - 9532.509: 47.0460% ( 548) 00:15:37.884 9532.509 - 9592.087: 51.1427% ( 527) 00:15:37.884 9592.087 - 9651.665: 55.2705% ( 531) 00:15:37.884 9651.665 - 9711.244: 59.2973% ( 518) 00:15:37.884 9711.244 - 9770.822: 62.8420% ( 456) 00:15:37.884 9770.822 - 9830.400: 66.3013% ( 445) 00:15:37.884 9830.400 - 9889.978: 69.4263% ( 402) 
00:15:37.884 9889.978 - 9949.556: 72.3647% ( 378) 00:15:37.884 9949.556 - 10009.135: 74.8523% ( 320) 00:15:37.884 10009.135 - 10068.713: 77.1688% ( 298) 00:15:37.884 10068.713 - 10128.291: 79.1123% ( 250) 00:15:37.884 10128.291 - 10187.869: 80.7603% ( 212) 00:15:37.884 10187.869 - 10247.447: 82.2606% ( 193) 00:15:37.884 10247.447 - 10307.025: 83.5976% ( 172) 00:15:37.884 10307.025 - 10366.604: 84.7248% ( 145) 00:15:37.884 10366.604 - 10426.182: 85.7587% ( 133) 00:15:37.884 10426.182 - 10485.760: 86.6449% ( 114) 00:15:37.884 10485.760 - 10545.338: 87.4456% ( 103) 00:15:37.884 10545.338 - 10604.916: 88.2696% ( 106) 00:15:37.884 10604.916 - 10664.495: 89.0625% ( 102) 00:15:37.884 10664.495 - 10724.073: 89.8088% ( 96) 00:15:37.884 10724.073 - 10783.651: 90.5628% ( 97) 00:15:37.884 10783.651 - 10843.229: 91.2935% ( 94) 00:15:37.884 10843.229 - 10902.807: 92.0476% ( 97) 00:15:37.884 10902.807 - 10962.385: 92.7783% ( 94) 00:15:37.884 10962.385 - 11021.964: 93.4701% ( 89) 00:15:37.884 11021.964 - 11081.542: 94.0454% ( 74) 00:15:37.884 11081.542 - 11141.120: 94.5740% ( 68) 00:15:37.884 11141.120 - 11200.698: 95.1026% ( 68) 00:15:37.884 11200.698 - 11260.276: 95.5302% ( 55) 00:15:37.884 11260.276 - 11319.855: 95.8800% ( 45) 00:15:37.884 11319.855 - 11379.433: 96.2298% ( 45) 00:15:37.884 11379.433 - 11439.011: 96.5641% ( 43) 00:15:37.884 11439.011 - 11498.589: 96.8905% ( 42) 00:15:37.884 11498.589 - 11558.167: 97.1393% ( 32) 00:15:37.884 11558.167 - 11617.745: 97.3803% ( 31) 00:15:37.884 11617.745 - 11677.324: 97.6213% ( 31) 00:15:37.884 11677.324 - 11736.902: 97.7845% ( 21) 00:15:37.884 11736.902 - 11796.480: 97.9167% ( 17) 00:15:37.884 11796.480 - 11856.058: 98.0410% ( 16) 00:15:37.884 11856.058 - 11915.636: 98.1343% ( 12) 00:15:37.884 11915.636 - 11975.215: 98.2276% ( 12) 00:15:37.884 11975.215 - 12034.793: 98.2976% ( 9) 00:15:37.884 12034.793 - 12094.371: 98.3675% ( 9) 00:15:37.884 12094.371 - 12153.949: 98.4220% ( 7) 00:15:37.884 12153.949 - 12213.527: 98.4919% ( 9) 00:15:37.884 12213.527 - 12273.105: 98.5463% ( 7) 00:15:37.884 12273.105 - 12332.684: 98.5852% ( 5) 00:15:37.884 12332.684 - 12392.262: 98.6085% ( 3) 00:15:37.884 12392.262 - 12451.840: 98.6396% ( 4) 00:15:37.884 12451.840 - 12511.418: 98.6785% ( 5) 00:15:37.884 12511.418 - 12570.996: 98.7174% ( 5) 00:15:37.884 12570.996 - 12630.575: 98.7562% ( 5) 00:15:37.884 12630.575 - 12690.153: 98.7873% ( 4) 00:15:37.884 12690.153 - 12749.731: 98.8417% ( 7) 00:15:37.884 12749.731 - 12809.309: 98.8884% ( 6) 00:15:37.884 12809.309 - 12868.887: 98.9350% ( 6) 00:15:37.884 12868.887 - 12928.465: 98.9739% ( 5) 00:15:37.884 12928.465 - 12988.044: 98.9972% ( 3) 00:15:37.884 12988.044 - 13047.622: 99.0050% ( 1) 00:15:37.884 23592.960 - 23712.116: 99.0127% ( 1) 00:15:37.884 23712.116 - 23831.273: 99.0283% ( 2) 00:15:37.884 23831.273 - 23950.429: 99.0516% ( 3) 00:15:37.884 23950.429 - 24069.585: 99.0749% ( 3) 00:15:37.884 24069.585 - 24188.742: 99.0983% ( 3) 00:15:37.884 24188.742 - 24307.898: 99.1216% ( 3) 00:15:37.884 24307.898 - 24427.055: 99.1449% ( 3) 00:15:37.884 24427.055 - 24546.211: 99.1682% ( 3) 00:15:37.884 24546.211 - 24665.367: 99.1915% ( 3) 00:15:37.884 24665.367 - 24784.524: 99.2149% ( 3) 00:15:37.884 24784.524 - 24903.680: 99.2382% ( 3) 00:15:37.884 24903.680 - 25022.836: 99.2537% ( 2) 00:15:37.884 25022.836 - 25141.993: 99.2771% ( 3) 00:15:37.884 25141.993 - 25261.149: 99.3004% ( 3) 00:15:37.884 25261.149 - 25380.305: 99.3237% ( 3) 00:15:37.884 25380.305 - 25499.462: 99.3470% ( 3) 00:15:37.884 25499.462 - 25618.618: 99.3703% ( 3) 
00:15:37.884 25618.618 - 25737.775: 99.3937% ( 3) 00:15:37.884 25737.775 - 25856.931: 99.4170% ( 3) 00:15:37.884 25856.931 - 25976.087: 99.4403% ( 3) 00:15:37.884 25976.087 - 26095.244: 99.4636% ( 3) 00:15:37.884 26095.244 - 26214.400: 99.4792% ( 2) 00:15:37.884 26214.400 - 26333.556: 99.5025% ( 3) 00:15:37.884 30742.342 - 30980.655: 99.5103% ( 1) 00:15:37.884 30980.655 - 31218.967: 99.5569% ( 6) 00:15:37.884 31218.967 - 31457.280: 99.6035% ( 6) 00:15:37.884 31457.280 - 31695.593: 99.6424% ( 5) 00:15:37.884 31695.593 - 31933.905: 99.6968% ( 7) 00:15:37.884 31933.905 - 32172.218: 99.7357% ( 5) 00:15:37.884 32172.218 - 32410.531: 99.7901% ( 7) 00:15:37.884 32410.531 - 32648.844: 99.8368% ( 6) 00:15:37.884 32648.844 - 32887.156: 99.8834% ( 6) 00:15:37.884 32887.156 - 33125.469: 99.9300% ( 6) 00:15:37.884 33125.469 - 33363.782: 99.9767% ( 6) 00:15:37.884 33363.782 - 33602.095: 100.0000% ( 3) 00:15:37.884 00:15:37.884 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:37.884 ============================================================================== 00:15:37.884 Range in us Cumulative IO count 00:15:37.884 8340.945 - 8400.524: 0.0155% ( 2) 00:15:37.884 8400.524 - 8460.102: 0.0622% ( 6) 00:15:37.884 8460.102 - 8519.680: 0.2021% ( 18) 00:15:37.884 8519.680 - 8579.258: 0.4509% ( 32) 00:15:37.884 8579.258 - 8638.836: 0.9328% ( 62) 00:15:37.884 8638.836 - 8698.415: 1.5236% ( 76) 00:15:37.884 8698.415 - 8757.993: 2.5187% ( 128) 00:15:37.884 8757.993 - 8817.571: 3.8635% ( 173) 00:15:37.884 8817.571 - 8877.149: 5.7136% ( 238) 00:15:37.884 8877.149 - 8936.727: 7.9602% ( 289) 00:15:37.884 8936.727 - 8996.305: 10.6887% ( 351) 00:15:37.884 8996.305 - 9055.884: 14.0236% ( 429) 00:15:37.884 9055.884 - 9115.462: 17.7861% ( 484) 00:15:37.884 9115.462 - 9175.040: 21.6029% ( 491) 00:15:37.884 9175.040 - 9234.618: 25.5752% ( 511) 00:15:37.884 9234.618 - 9294.196: 29.7963% ( 543) 00:15:37.884 9294.196 - 9353.775: 34.1418% ( 559) 00:15:37.884 9353.775 - 9413.353: 38.5028% ( 561) 00:15:37.884 9413.353 - 9472.931: 42.7705% ( 549) 00:15:37.884 9472.931 - 9532.509: 46.9527% ( 538) 00:15:37.884 9532.509 - 9592.087: 51.0028% ( 521) 00:15:37.884 9592.087 - 9651.665: 55.0684% ( 523) 00:15:37.884 9651.665 - 9711.244: 58.9785% ( 503) 00:15:37.884 9711.244 - 9770.822: 62.6943% ( 478) 00:15:37.884 9770.822 - 9830.400: 66.2002% ( 451) 00:15:37.884 9830.400 - 9889.978: 69.3408% ( 404) 00:15:37.884 9889.978 - 9949.556: 72.2170% ( 370) 00:15:37.884 9949.556 - 10009.135: 74.7823% ( 330) 00:15:37.884 10009.135 - 10068.713: 77.0522% ( 292) 00:15:37.884 10068.713 - 10128.291: 79.0578% ( 258) 00:15:37.884 10128.291 - 10187.869: 80.6670% ( 207) 00:15:37.884 10187.869 - 10247.447: 82.1206% ( 187) 00:15:37.884 10247.447 - 10307.025: 83.4344% ( 169) 00:15:37.884 10307.025 - 10366.604: 84.5149% ( 139) 00:15:37.884 10366.604 - 10426.182: 85.4944% ( 126) 00:15:37.884 10426.182 - 10485.760: 86.4661% ( 125) 00:15:37.884 10485.760 - 10545.338: 87.3134% ( 109) 00:15:37.884 10545.338 - 10604.916: 88.1530% ( 108) 00:15:37.884 10604.916 - 10664.495: 88.9070% ( 97) 00:15:37.884 10664.495 - 10724.073: 89.6611% ( 97) 00:15:37.884 10724.073 - 10783.651: 90.4229% ( 98) 00:15:37.884 10783.651 - 10843.229: 91.2313% ( 104) 00:15:37.884 10843.229 - 10902.807: 91.9932% ( 98) 00:15:37.884 10902.807 - 10962.385: 92.7550% ( 98) 00:15:37.884 10962.385 - 11021.964: 93.4468% ( 89) 00:15:37.884 11021.964 - 11081.542: 94.0687% ( 80) 00:15:37.884 11081.542 - 11141.120: 94.5818% ( 66) 00:15:37.884 11141.120 - 11200.698: 95.0715% ( 63) 00:15:37.884 
11200.698 - 11260.276: 95.5224% ( 58) 00:15:37.884 11260.276 - 11319.855: 95.9655% ( 57) 00:15:37.884 11319.855 - 11379.433: 96.3542% ( 50) 00:15:37.884 11379.433 - 11439.011: 96.6884% ( 43) 00:15:37.884 11439.011 - 11498.589: 96.9450% ( 33) 00:15:37.884 11498.589 - 11558.167: 97.1937% ( 32) 00:15:37.884 11558.167 - 11617.745: 97.4192% ( 29) 00:15:37.884 11617.745 - 11677.324: 97.6213% ( 26) 00:15:37.884 11677.324 - 11736.902: 97.7767% ( 20) 00:15:37.884 11736.902 - 11796.480: 97.9322% ( 20) 00:15:37.884 11796.480 - 11856.058: 98.0877% ( 20) 00:15:37.884 11856.058 - 11915.636: 98.2043% ( 15) 00:15:37.884 11915.636 - 11975.215: 98.2743% ( 9) 00:15:37.884 11975.215 - 12034.793: 98.3520% ( 10) 00:15:37.884 12034.793 - 12094.371: 98.4297% ( 10) 00:15:37.884 12094.371 - 12153.949: 98.5075% ( 10) 00:15:37.884 12153.949 - 12213.527: 98.5774% ( 9) 00:15:37.884 12213.527 - 12273.105: 98.6396% ( 8) 00:15:37.884 12273.105 - 12332.684: 98.7018% ( 8) 00:15:37.884 12332.684 - 12392.262: 98.7484% ( 6) 00:15:37.884 12392.262 - 12451.840: 98.7873% ( 5) 00:15:37.884 12451.840 - 12511.418: 98.8340% ( 6) 00:15:37.884 12511.418 - 12570.996: 98.8806% ( 6) 00:15:37.884 12570.996 - 12630.575: 98.9117% ( 4) 00:15:37.884 12630.575 - 12690.153: 98.9428% ( 4) 00:15:37.884 12690.153 - 12749.731: 98.9661% ( 3) 00:15:37.884 12749.731 - 12809.309: 98.9894% ( 3) 00:15:37.884 12809.309 - 12868.887: 99.0050% ( 2) 00:15:37.884 20852.364 - 20971.520: 99.0283% ( 3) 00:15:37.884 20971.520 - 21090.676: 99.0438% ( 2) 00:15:37.884 21090.676 - 21209.833: 99.0827% ( 5) 00:15:37.884 21209.833 - 21328.989: 99.0983% ( 2) 00:15:37.884 21328.989 - 21448.145: 99.1216% ( 3) 00:15:37.884 21448.145 - 21567.302: 99.1449% ( 3) 00:15:37.885 21567.302 - 21686.458: 99.1682% ( 3) 00:15:37.885 21686.458 - 21805.615: 99.1915% ( 3) 00:15:37.885 21805.615 - 21924.771: 99.2149% ( 3) 00:15:37.885 21924.771 - 22043.927: 99.2382% ( 3) 00:15:37.885 22043.927 - 22163.084: 99.2615% ( 3) 00:15:37.885 22163.084 - 22282.240: 99.2771% ( 2) 00:15:37.885 22282.240 - 22401.396: 99.3081% ( 4) 00:15:37.885 22401.396 - 22520.553: 99.3315% ( 3) 00:15:37.885 22520.553 - 22639.709: 99.3548% ( 3) 00:15:37.885 22639.709 - 22758.865: 99.3781% ( 3) 00:15:37.885 22758.865 - 22878.022: 99.3937% ( 2) 00:15:37.885 22878.022 - 22997.178: 99.4170% ( 3) 00:15:37.885 22997.178 - 23116.335: 99.4403% ( 3) 00:15:37.885 23116.335 - 23235.491: 99.4636% ( 3) 00:15:37.885 23235.491 - 23354.647: 99.4869% ( 3) 00:15:37.885 23354.647 - 23473.804: 99.5025% ( 2) 00:15:37.885 27882.589 - 28001.745: 99.5180% ( 2) 00:15:37.885 28001.745 - 28120.902: 99.5414% ( 3) 00:15:37.885 28120.902 - 28240.058: 99.5647% ( 3) 00:15:37.885 28240.058 - 28359.215: 99.5880% ( 3) 00:15:37.885 28359.215 - 28478.371: 99.6191% ( 4) 00:15:37.885 28478.371 - 28597.527: 99.6424% ( 3) 00:15:37.885 28597.527 - 28716.684: 99.6657% ( 3) 00:15:37.885 28716.684 - 28835.840: 99.6891% ( 3) 00:15:37.885 28835.840 - 28954.996: 99.7124% ( 3) 00:15:37.885 28954.996 - 29074.153: 99.7357% ( 3) 00:15:37.885 29074.153 - 29193.309: 99.7590% ( 3) 00:15:37.885 29193.309 - 29312.465: 99.7823% ( 3) 00:15:37.885 29312.465 - 29431.622: 99.8057% ( 3) 00:15:37.885 29431.622 - 29550.778: 99.8368% ( 4) 00:15:37.885 29550.778 - 29669.935: 99.8601% ( 3) 00:15:37.885 29669.935 - 29789.091: 99.8834% ( 3) 00:15:37.885 29789.091 - 29908.247: 99.9067% ( 3) 00:15:37.885 29908.247 - 30027.404: 99.9300% ( 3) 00:15:37.885 30027.404 - 30146.560: 99.9456% ( 2) 00:15:37.885 30146.560 - 30265.716: 99.9767% ( 4) 00:15:37.885 30265.716 - 30384.873: 100.0000% ( 3) 
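This concludes the read pass; nvme.sh immediately reruns the same benchmark with -w write, so the write-side tables below can be compared row for row against the read results above. To pull the per-device IOPS and average latency out of a saved copy of this console output, a small awk sketch along these lines would work (a hypothetical helper, not part of the test suite; console.log is an assumed file name, and the field positions assume every captured line carries the leading HH:MM:SS.mmm timestamp as in this log):

awk '$2 == "PCIE" && $6 == "from" { print $3, $9, $11 }' console.log
# $3 = PCI address, $9 = IOPS, $11 = average latency (us)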
00:15:37.885 00:15:37.885 17:16:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:15:39.261 Initializing NVMe Controllers 00:15:39.261 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:39.261 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:39.261 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:39.261 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:39.261 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:39.261 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:39.261 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:39.261 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:39.261 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:39.261 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:39.261 Initialization complete. Launching workers. 00:15:39.261 ======================================================== 00:15:39.261 Latency(us) 00:15:39.261 Device Information : IOPS MiB/s Average min max 00:15:39.261 PCIE (0000:00:10.0) NSID 1 from core 0: 12211.39 143.10 10503.29 8181.76 40020.11 00:15:39.261 PCIE (0000:00:11.0) NSID 1 from core 0: 12211.39 143.10 10483.54 8729.03 37590.18 00:15:39.261 PCIE (0000:00:13.0) NSID 1 from core 0: 12211.39 143.10 10462.55 8569.72 36744.33 00:15:39.261 PCIE (0000:00:12.0) NSID 1 from core 0: 12211.39 143.10 10440.65 8681.90 34838.04 00:15:39.261 PCIE (0000:00:12.0) NSID 2 from core 0: 12211.39 143.10 10417.42 8613.89 32995.00 00:15:39.261 PCIE (0000:00:12.0) NSID 3 from core 0: 12211.39 143.10 10394.82 8567.00 30878.07 00:15:39.261 ======================================================== 00:15:39.261 Total : 73268.31 858.61 10450.38 8181.76 40020.11 00:15:39.261 00:15:39.261 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:39.261 ================================================================================= 00:15:39.261 1.00000% : 8877.149us 00:15:39.261 10.00000% : 9353.775us 00:15:39.261 25.00000% : 9651.665us 00:15:39.261 50.00000% : 10128.291us 00:15:39.261 75.00000% : 10724.073us 00:15:39.261 90.00000% : 11558.167us 00:15:39.261 95.00000% : 12034.793us 00:15:39.261 98.00000% : 13107.200us 00:15:39.261 99.00000% : 28835.840us 00:15:39.261 99.50000% : 37891.724us 00:15:39.261 99.90000% : 39559.913us 00:15:39.261 99.99000% : 40036.538us 00:15:39.261 99.99900% : 40036.538us 00:15:39.261 99.99990% : 40036.538us 00:15:39.261 99.99999% : 40036.538us 00:15:39.261 00:15:39.261 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:39.261 ================================================================================= 00:15:39.261 1.00000% : 8996.305us 00:15:39.261 10.00000% : 9413.353us 00:15:39.262 25.00000% : 9711.244us 00:15:39.262 50.00000% : 10128.291us 00:15:39.262 75.00000% : 10664.495us 00:15:39.262 90.00000% : 11677.324us 00:15:39.262 95.00000% : 11975.215us 00:15:39.262 98.00000% : 13226.356us 00:15:39.262 99.00000% : 28240.058us 00:15:39.262 99.50000% : 35746.909us 00:15:39.262 99.90000% : 37415.098us 00:15:39.262 99.99000% : 37653.411us 00:15:39.262 99.99900% : 37653.411us 00:15:39.262 99.99990% : 37653.411us 00:15:39.262 99.99999% : 37653.411us 00:15:39.262 00:15:39.262 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:39.262 ================================================================================= 00:15:39.262 1.00000% : 9115.462us 00:15:39.262 10.00000% : 9413.353us 00:15:39.262 25.00000% : 
9711.244us 00:15:39.262 50.00000% : 10128.291us 00:15:39.262 75.00000% : 10664.495us 00:15:39.262 90.00000% : 11617.745us 00:15:39.262 95.00000% : 11915.636us 00:15:39.262 98.00000% : 12868.887us 00:15:39.262 99.00000% : 26929.338us 00:15:39.262 99.50000% : 34793.658us 00:15:39.262 99.90000% : 36461.847us 00:15:39.262 99.99000% : 36938.473us 00:15:39.262 99.99900% : 36938.473us 00:15:39.262 99.99990% : 36938.473us 00:15:39.262 99.99999% : 36938.473us 00:15:39.262 00:15:39.262 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:39.262 ================================================================================= 00:15:39.262 1.00000% : 9115.462us 00:15:39.262 10.00000% : 9413.353us 00:15:39.262 25.00000% : 9711.244us 00:15:39.262 50.00000% : 10128.291us 00:15:39.262 75.00000% : 10664.495us 00:15:39.262 90.00000% : 11558.167us 00:15:39.262 95.00000% : 11915.636us 00:15:39.262 98.00000% : 12570.996us 00:15:39.262 99.00000% : 24903.680us 00:15:39.262 99.50000% : 32887.156us 00:15:39.262 99.90000% : 34555.345us 00:15:39.262 99.99000% : 35031.971us 00:15:39.262 99.99900% : 35031.971us 00:15:39.262 99.99990% : 35031.971us 00:15:39.262 99.99999% : 35031.971us 00:15:39.262 00:15:39.262 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:39.262 ================================================================================= 00:15:39.262 1.00000% : 9055.884us 00:15:39.262 10.00000% : 9413.353us 00:15:39.262 25.00000% : 9711.244us 00:15:39.262 50.00000% : 10068.713us 00:15:39.262 75.00000% : 10664.495us 00:15:39.262 90.00000% : 11558.167us 00:15:39.262 95.00000% : 11915.636us 00:15:39.262 98.00000% : 12451.840us 00:15:39.262 99.00000% : 23116.335us 00:15:39.262 99.50000% : 29669.935us 00:15:39.262 99.90000% : 32648.844us 00:15:39.262 99.99000% : 33125.469us 00:15:39.262 99.99900% : 33125.469us 00:15:39.262 99.99990% : 33125.469us 00:15:39.262 99.99999% : 33125.469us 00:15:39.262 00:15:39.262 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:39.262 ================================================================================= 00:15:39.262 1.00000% : 9055.884us 00:15:39.262 10.00000% : 9413.353us 00:15:39.262 25.00000% : 9651.665us 00:15:39.262 50.00000% : 10128.291us 00:15:39.262 75.00000% : 10664.495us 00:15:39.262 90.00000% : 11617.745us 00:15:39.262 95.00000% : 11915.636us 00:15:39.262 98.00000% : 12332.684us 00:15:39.262 99.00000% : 20733.207us 00:15:39.262 99.50000% : 27644.276us 00:15:39.262 99.90000% : 30504.029us 00:15:39.262 99.99000% : 30980.655us 00:15:39.262 99.99900% : 30980.655us 00:15:39.262 99.99990% : 30980.655us 00:15:39.262 99.99999% : 30980.655us 00:15:39.262 00:15:39.262 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:39.262 ============================================================================== 00:15:39.262 Range in us Cumulative IO count 00:15:39.262 8162.211 - 8221.789: 0.0164% ( 2) 00:15:39.262 8221.789 - 8281.367: 0.0409% ( 3) 00:15:39.262 8281.367 - 8340.945: 0.0573% ( 2) 00:15:39.262 8340.945 - 8400.524: 0.0900% ( 4) 00:15:39.262 8400.524 - 8460.102: 0.1227% ( 4) 00:15:39.262 8460.102 - 8519.680: 0.1882% ( 8) 00:15:39.262 8519.680 - 8579.258: 0.3190% ( 16) 00:15:39.262 8579.258 - 8638.836: 0.4254% ( 13) 00:15:39.262 8638.836 - 8698.415: 0.5236% ( 12) 00:15:39.262 8698.415 - 8757.993: 0.6135% ( 11) 00:15:39.262 8757.993 - 8817.571: 0.7772% ( 20) 00:15:39.262 8817.571 - 8877.149: 1.1535% ( 46) 00:15:39.262 8877.149 - 8936.727: 1.6770% ( 64) 00:15:39.262 8936.727 - 8996.305: 2.3315% ( 
80) 00:15:39.262 8996.305 - 9055.884: 3.1823% ( 104) 00:15:39.262 9055.884 - 9115.462: 4.3766% ( 146) 00:15:39.262 9115.462 - 9175.040: 5.6774% ( 159) 00:15:39.262 9175.040 - 9234.618: 7.5180% ( 225) 00:15:39.262 9234.618 - 9294.196: 9.6859% ( 265) 00:15:39.262 9294.196 - 9353.775: 12.2219% ( 310) 00:15:39.262 9353.775 - 9413.353: 14.2997% ( 254) 00:15:39.262 9413.353 - 9472.931: 16.8194% ( 308) 00:15:39.262 9472.931 - 9532.509: 19.4863% ( 326) 00:15:39.262 9532.509 - 9592.087: 22.7503% ( 399) 00:15:39.262 9592.087 - 9651.665: 26.3743% ( 443) 00:15:39.262 9651.665 - 9711.244: 29.5402% ( 387) 00:15:39.262 9711.244 - 9770.822: 32.8861% ( 409) 00:15:39.262 9770.822 - 9830.400: 36.2156% ( 407) 00:15:39.262 9830.400 - 9889.978: 39.4797% ( 399) 00:15:39.262 9889.978 - 9949.556: 42.5474% ( 375) 00:15:39.262 9949.556 - 10009.135: 45.8770% ( 407) 00:15:39.262 10009.135 - 10068.713: 49.3865% ( 429) 00:15:39.262 10068.713 - 10128.291: 52.1024% ( 332) 00:15:39.262 10128.291 - 10187.869: 54.6548% ( 312) 00:15:39.262 10187.869 - 10247.447: 57.1008% ( 299) 00:15:39.262 10247.447 - 10307.025: 59.7677% ( 326) 00:15:39.262 10307.025 - 10366.604: 62.7536% ( 365) 00:15:39.262 10366.604 - 10426.182: 65.6741% ( 357) 00:15:39.262 10426.182 - 10485.760: 67.9565% ( 279) 00:15:39.262 10485.760 - 10545.338: 69.8626% ( 233) 00:15:39.262 10545.338 - 10604.916: 72.0795% ( 271) 00:15:39.262 10604.916 - 10664.495: 74.1329% ( 251) 00:15:39.262 10664.495 - 10724.073: 75.5972% ( 179) 00:15:39.262 10724.073 - 10783.651: 76.8815% ( 157) 00:15:39.262 10783.651 - 10843.229: 78.1577% ( 156) 00:15:39.262 10843.229 - 10902.807: 79.4421% ( 157) 00:15:39.262 10902.807 - 10962.385: 80.6365% ( 146) 00:15:39.262 10962.385 - 11021.964: 81.4627% ( 101) 00:15:39.262 11021.964 - 11081.542: 82.4853% ( 125) 00:15:39.262 11081.542 - 11141.120: 83.4179% ( 114) 00:15:39.262 11141.120 - 11200.698: 84.2768% ( 105) 00:15:39.262 11200.698 - 11260.276: 85.0295% ( 92) 00:15:39.262 11260.276 - 11319.855: 85.9702% ( 115) 00:15:39.262 11319.855 - 11379.433: 87.0910% ( 137) 00:15:39.262 11379.433 - 11439.011: 88.6207% ( 187) 00:15:39.262 11439.011 - 11498.589: 89.8151% ( 146) 00:15:39.262 11498.589 - 11558.167: 90.8541% ( 127) 00:15:39.262 11558.167 - 11617.745: 91.7212% ( 106) 00:15:39.262 11617.745 - 11677.324: 92.4738% ( 92) 00:15:39.262 11677.324 - 11736.902: 92.9565% ( 59) 00:15:39.262 11736.902 - 11796.480: 93.5209% ( 69) 00:15:39.262 11796.480 - 11856.058: 93.9627% ( 54) 00:15:39.262 11856.058 - 11915.636: 94.3308% ( 45) 00:15:39.262 11915.636 - 11975.215: 94.6662% ( 41) 00:15:39.262 11975.215 - 12034.793: 95.0180% ( 43) 00:15:39.262 12034.793 - 12094.371: 95.2961% ( 34) 00:15:39.262 12094.371 - 12153.949: 95.6479% ( 43) 00:15:39.262 12153.949 - 12213.527: 96.0160% ( 45) 00:15:39.262 12213.527 - 12273.105: 96.3596% ( 42) 00:15:39.262 12273.105 - 12332.684: 96.6623% ( 37) 00:15:39.262 12332.684 - 12392.262: 96.9159% ( 31) 00:15:39.262 12392.262 - 12451.840: 97.2022% ( 35) 00:15:39.262 12451.840 - 12511.418: 97.4231% ( 27) 00:15:39.262 12511.418 - 12570.996: 97.6276% ( 25) 00:15:39.262 12570.996 - 12630.575: 97.7503% ( 15) 00:15:39.262 12630.575 - 12690.153: 97.8076% ( 7) 00:15:39.262 12690.153 - 12749.731: 97.8321% ( 3) 00:15:39.262 12749.731 - 12809.309: 97.8567% ( 3) 00:15:39.262 12809.309 - 12868.887: 97.8894% ( 4) 00:15:39.262 12868.887 - 12928.465: 97.9058% ( 2) 00:15:39.262 12988.044 - 13047.622: 97.9548% ( 6) 00:15:39.262 13047.622 - 13107.200: 98.0039% ( 6) 00:15:39.262 13107.200 - 13166.778: 98.0203% ( 2) 00:15:39.262 13166.778 - 
13226.356: 98.0366% ( 2) 00:15:39.262 13226.356 - 13285.935: 98.0776% ( 5) 00:15:39.262 13285.935 - 13345.513: 98.1021% ( 3) 00:15:39.262 13345.513 - 13405.091: 98.1348% ( 4) 00:15:39.262 13405.091 - 13464.669: 98.1921% ( 7) 00:15:39.262 13464.669 - 13524.247: 98.2575% ( 8) 00:15:39.262 13822.138 - 13881.716: 98.2739% ( 2) 00:15:39.262 13881.716 - 13941.295: 98.2902% ( 2) 00:15:39.262 13941.295 - 14000.873: 98.3066% ( 2) 00:15:39.262 14000.873 - 14060.451: 98.3230% ( 2) 00:15:39.262 14060.451 - 14120.029: 98.3475% ( 3) 00:15:39.262 14120.029 - 14179.607: 98.3721% ( 3) 00:15:39.262 14179.607 - 14239.185: 98.4211% ( 6) 00:15:39.262 14239.185 - 14298.764: 98.4784% ( 7) 00:15:39.262 14298.764 - 14358.342: 98.5357% ( 7) 00:15:39.262 14358.342 - 14417.920: 98.5602% ( 3) 00:15:39.262 14417.920 - 14477.498: 98.5848% ( 3) 00:15:39.262 14477.498 - 14537.076: 98.6093% ( 3) 00:15:39.262 14537.076 - 14596.655: 98.6420% ( 4) 00:15:39.262 14596.655 - 14656.233: 98.6666% ( 3) 00:15:39.262 14656.233 - 14715.811: 98.7320% ( 8) 00:15:39.262 14715.811 - 14775.389: 98.7974% ( 8) 00:15:39.262 14775.389 - 14834.967: 98.8547% ( 7) 00:15:39.262 14834.967 - 14894.545: 98.8711% ( 2) 00:15:39.262 14894.545 - 14954.124: 98.8793% ( 1) 00:15:39.262 14954.124 - 15013.702: 98.8874% ( 1) 00:15:39.262 15371.171 - 15490.327: 98.9120% ( 3) 00:15:39.262 15490.327 - 15609.484: 98.9529% ( 5) 00:15:39.262 28597.527 - 28716.684: 98.9856% ( 4) 00:15:39.262 28716.684 - 28835.840: 99.0020% ( 2) 00:15:39.262 28835.840 - 28954.996: 99.0183% ( 2) 00:15:39.263 28954.996 - 29074.153: 99.0510% ( 4) 00:15:39.263 29074.153 - 29193.309: 99.0756% ( 3) 00:15:39.263 29193.309 - 29312.465: 99.0920% ( 2) 00:15:39.263 29312.465 - 29431.622: 99.1165% ( 3) 00:15:39.263 29431.622 - 29550.778: 99.1329% ( 2) 00:15:39.263 29550.778 - 29669.935: 99.1574% ( 3) 00:15:39.263 29669.935 - 29789.091: 99.1819% ( 3) 00:15:39.263 29789.091 - 29908.247: 99.2065% ( 3) 00:15:39.263 29908.247 - 30027.404: 99.2228% ( 2) 00:15:39.263 30027.404 - 30146.560: 99.2474% ( 3) 00:15:39.263 30146.560 - 30265.716: 99.2801% ( 4) 00:15:39.263 30265.716 - 30384.873: 99.2965% ( 2) 00:15:39.263 30384.873 - 30504.029: 99.3292% ( 4) 00:15:39.263 30504.029 - 30742.342: 99.3783% ( 6) 00:15:39.263 30742.342 - 30980.655: 99.4274% ( 6) 00:15:39.263 30980.655 - 31218.967: 99.4683% ( 5) 00:15:39.263 31218.967 - 31457.280: 99.4764% ( 1) 00:15:39.263 37415.098 - 37653.411: 99.4928% ( 2) 00:15:39.263 37653.411 - 37891.724: 99.5255% ( 4) 00:15:39.263 37891.724 - 38130.036: 99.5991% ( 9) 00:15:39.263 38130.036 - 38368.349: 99.6482% ( 6) 00:15:39.263 38368.349 - 38606.662: 99.6973% ( 6) 00:15:39.263 38606.662 - 38844.975: 99.7464% ( 6) 00:15:39.263 38844.975 - 39083.287: 99.7955% ( 6) 00:15:39.263 39083.287 - 39321.600: 99.8527% ( 7) 00:15:39.263 39321.600 - 39559.913: 99.9018% ( 6) 00:15:39.263 39559.913 - 39798.225: 99.9509% ( 6) 00:15:39.263 39798.225 - 40036.538: 100.0000% ( 6) 00:15:39.263 00:15:39.263 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:39.263 ============================================================================== 00:15:39.263 Range in us Cumulative IO count 00:15:39.263 8698.415 - 8757.993: 0.0327% ( 4) 00:15:39.263 8757.993 - 8817.571: 0.0736% ( 5) 00:15:39.263 8817.571 - 8877.149: 0.1554% ( 10) 00:15:39.263 8877.149 - 8936.727: 0.4990% ( 42) 00:15:39.263 8936.727 - 8996.305: 1.0635% ( 69) 00:15:39.263 8996.305 - 9055.884: 1.6770% ( 75) 00:15:39.263 9055.884 - 9115.462: 2.7569% ( 132) 00:15:39.263 9115.462 - 9175.040: 3.5422% ( 96) 00:15:39.263 
9175.040 - 9234.618: 5.0720% ( 187) 00:15:39.263 9234.618 - 9294.196: 6.6099% ( 188) 00:15:39.263 9294.196 - 9353.775: 8.6715% ( 252) 00:15:39.263 9353.775 - 9413.353: 11.0438% ( 290) 00:15:39.263 9413.353 - 9472.931: 14.1279% ( 377) 00:15:39.263 9472.931 - 9532.509: 17.7438% ( 442) 00:15:39.263 9532.509 - 9592.087: 21.2696% ( 431) 00:15:39.263 9592.087 - 9651.665: 24.9673% ( 452) 00:15:39.263 9651.665 - 9711.244: 28.6486% ( 450) 00:15:39.263 9711.244 - 9770.822: 32.4853% ( 469) 00:15:39.263 9770.822 - 9830.400: 36.1829% ( 452) 00:15:39.263 9830.400 - 9889.978: 39.9296% ( 458) 00:15:39.263 9889.978 - 9949.556: 43.4637% ( 432) 00:15:39.263 9949.556 - 10009.135: 46.7196% ( 398) 00:15:39.263 10009.135 - 10068.713: 49.8037% ( 377) 00:15:39.263 10068.713 - 10128.291: 53.3050% ( 428) 00:15:39.263 10128.291 - 10187.869: 56.3727% ( 375) 00:15:39.263 10187.869 - 10247.447: 59.1378% ( 338) 00:15:39.263 10247.447 - 10307.025: 62.0991% ( 362) 00:15:39.263 10307.025 - 10366.604: 64.6024% ( 306) 00:15:39.263 10366.604 - 10426.182: 67.1957% ( 317) 00:15:39.263 10426.182 - 10485.760: 69.2490% ( 251) 00:15:39.263 10485.760 - 10545.338: 71.2615% ( 246) 00:15:39.263 10545.338 - 10604.916: 73.6420% ( 291) 00:15:39.263 10604.916 - 10664.495: 75.5890% ( 238) 00:15:39.263 10664.495 - 10724.073: 77.3887% ( 220) 00:15:39.263 10724.073 - 10783.651: 78.6158% ( 150) 00:15:39.263 10783.651 - 10843.229: 79.5157% ( 110) 00:15:39.263 10843.229 - 10902.807: 80.6037% ( 133) 00:15:39.263 10902.807 - 10962.385: 81.4954% ( 109) 00:15:39.263 10962.385 - 11021.964: 82.2726% ( 95) 00:15:39.263 11021.964 - 11081.542: 83.0088% ( 90) 00:15:39.263 11081.542 - 11141.120: 83.9251% ( 112) 00:15:39.263 11141.120 - 11200.698: 84.6122% ( 84) 00:15:39.263 11200.698 - 11260.276: 85.2094% ( 73) 00:15:39.263 11260.276 - 11319.855: 85.6839% ( 58) 00:15:39.263 11319.855 - 11379.433: 86.2320% ( 67) 00:15:39.263 11379.433 - 11439.011: 86.9519% ( 88) 00:15:39.263 11439.011 - 11498.589: 87.9908% ( 127) 00:15:39.263 11498.589 - 11558.167: 88.7925% ( 98) 00:15:39.263 11558.167 - 11617.745: 89.9051% ( 136) 00:15:39.263 11617.745 - 11677.324: 90.8786% ( 119) 00:15:39.263 11677.324 - 11736.902: 91.9094% ( 126) 00:15:39.263 11736.902 - 11796.480: 93.1037% ( 146) 00:15:39.263 11796.480 - 11856.058: 94.1427% ( 127) 00:15:39.263 11856.058 - 11915.636: 94.9035% ( 93) 00:15:39.263 11915.636 - 11975.215: 95.5497% ( 79) 00:15:39.263 11975.215 - 12034.793: 96.0079% ( 56) 00:15:39.263 12034.793 - 12094.371: 96.4414% ( 53) 00:15:39.263 12094.371 - 12153.949: 96.8096% ( 45) 00:15:39.263 12153.949 - 12213.527: 97.1041% ( 36) 00:15:39.263 12213.527 - 12273.105: 97.3249% ( 27) 00:15:39.263 12273.105 - 12332.684: 97.5131% ( 23) 00:15:39.263 12332.684 - 12392.262: 97.6113% ( 12) 00:15:39.263 12392.262 - 12451.840: 97.7094% ( 12) 00:15:39.263 12451.840 - 12511.418: 97.7749% ( 8) 00:15:39.263 12511.418 - 12570.996: 97.8240% ( 6) 00:15:39.263 12570.996 - 12630.575: 97.8567% ( 4) 00:15:39.263 12630.575 - 12690.153: 97.8730% ( 2) 00:15:39.263 12690.153 - 12749.731: 97.8894% ( 2) 00:15:39.263 12749.731 - 12809.309: 97.9058% ( 2) 00:15:39.263 13107.200 - 13166.778: 97.9303% ( 3) 00:15:39.263 13166.778 - 13226.356: 98.0203% ( 11) 00:15:39.263 13226.356 - 13285.935: 98.1185% ( 12) 00:15:39.263 13285.935 - 13345.513: 98.3721% ( 31) 00:15:39.263 13345.513 - 13405.091: 98.5275% ( 19) 00:15:39.263 13405.091 - 13464.669: 98.6093% ( 10) 00:15:39.263 13464.669 - 13524.247: 98.6747% ( 8) 00:15:39.263 13524.247 - 13583.825: 98.7320% ( 7) 00:15:39.263 13583.825 - 13643.404: 98.7729% ( 5) 
00:15:39.263 13643.404 - 13702.982: 98.8220% ( 6) 00:15:39.263 13702.982 - 13762.560: 98.8465% ( 3) 00:15:39.263 13762.560 - 13822.138: 98.8629% ( 2) 00:15:39.263 13822.138 - 13881.716: 98.8793% ( 2) 00:15:39.263 13881.716 - 13941.295: 98.8956% ( 2) 00:15:39.263 13941.295 - 14000.873: 98.9120% ( 2) 00:15:39.263 14000.873 - 14060.451: 98.9365% ( 3) 00:15:39.263 14060.451 - 14120.029: 98.9529% ( 2) 00:15:39.263 27882.589 - 28001.745: 98.9611% ( 1) 00:15:39.263 28001.745 - 28120.902: 98.9938% ( 4) 00:15:39.263 28120.902 - 28240.058: 99.0183% ( 3) 00:15:39.263 28240.058 - 28359.215: 99.0510% ( 4) 00:15:39.263 28359.215 - 28478.371: 99.0756% ( 3) 00:15:39.263 28478.371 - 28597.527: 99.1001% ( 3) 00:15:39.263 28597.527 - 28716.684: 99.1329% ( 4) 00:15:39.263 28716.684 - 28835.840: 99.1574% ( 3) 00:15:39.263 28835.840 - 28954.996: 99.1901% ( 4) 00:15:39.263 28954.996 - 29074.153: 99.2147% ( 3) 00:15:39.263 29074.153 - 29193.309: 99.2474% ( 4) 00:15:39.263 29193.309 - 29312.465: 99.2719% ( 3) 00:15:39.263 29312.465 - 29431.622: 99.3046% ( 4) 00:15:39.263 29431.622 - 29550.778: 99.3292% ( 3) 00:15:39.263 29550.778 - 29669.935: 99.3619% ( 4) 00:15:39.263 29669.935 - 29789.091: 99.3865% ( 3) 00:15:39.263 29789.091 - 29908.247: 99.4192% ( 4) 00:15:39.263 29908.247 - 30027.404: 99.4519% ( 4) 00:15:39.263 30027.404 - 30146.560: 99.4764% ( 3) 00:15:39.263 35270.284 - 35508.596: 99.4928% ( 2) 00:15:39.263 35508.596 - 35746.909: 99.5501% ( 7) 00:15:39.263 35746.909 - 35985.222: 99.5991% ( 6) 00:15:39.263 35985.222 - 36223.535: 99.6401% ( 5) 00:15:39.263 36223.535 - 36461.847: 99.6973% ( 7) 00:15:39.263 36461.847 - 36700.160: 99.7628% ( 8) 00:15:39.263 36700.160 - 36938.473: 99.8282% ( 8) 00:15:39.263 36938.473 - 37176.785: 99.8855% ( 7) 00:15:39.263 37176.785 - 37415.098: 99.9509% ( 8) 00:15:39.263 37415.098 - 37653.411: 100.0000% ( 6) 00:15:39.263 00:15:39.263 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:39.263 ============================================================================== 00:15:39.263 Range in us Cumulative IO count 00:15:39.263 8519.680 - 8579.258: 0.0082% ( 1) 00:15:39.263 8579.258 - 8638.836: 0.0245% ( 2) 00:15:39.263 8638.836 - 8698.415: 0.0736% ( 6) 00:15:39.263 8698.415 - 8757.993: 0.1309% ( 7) 00:15:39.263 8757.993 - 8817.571: 0.2045% ( 9) 00:15:39.263 8817.571 - 8877.149: 0.2700% ( 8) 00:15:39.263 8877.149 - 8936.727: 0.4090% ( 17) 00:15:39.263 8936.727 - 8996.305: 0.6790% ( 33) 00:15:39.263 8996.305 - 9055.884: 0.9735% ( 36) 00:15:39.263 9055.884 - 9115.462: 1.6525% ( 83) 00:15:39.263 9115.462 - 9175.040: 2.6342% ( 120) 00:15:39.263 9175.040 - 9234.618: 4.3194% ( 206) 00:15:39.263 9234.618 - 9294.196: 6.2336% ( 234) 00:15:39.263 9294.196 - 9353.775: 8.7533% ( 308) 00:15:39.263 9353.775 - 9413.353: 11.1584% ( 294) 00:15:39.263 9413.353 - 9472.931: 14.1934% ( 371) 00:15:39.263 9472.931 - 9532.509: 17.6129% ( 418) 00:15:39.263 9532.509 - 9592.087: 20.6643% ( 373) 00:15:39.263 9592.087 - 9651.665: 23.7075% ( 372) 00:15:39.263 9651.665 - 9711.244: 27.2742% ( 436) 00:15:39.263 9711.244 - 9770.822: 30.3829% ( 380) 00:15:39.263 9770.822 - 9830.400: 33.2952% ( 356) 00:15:39.263 9830.400 - 9889.978: 36.8701% ( 437) 00:15:39.263 9889.978 - 9949.556: 41.0913% ( 516) 00:15:39.263 9949.556 - 10009.135: 45.4598% ( 534) 00:15:39.263 10009.135 - 10068.713: 49.5501% ( 500) 00:15:39.263 10068.713 - 10128.291: 53.4113% ( 472) 00:15:39.263 10128.291 - 10187.869: 57.2562% ( 470) 00:15:39.263 10187.869 - 10247.447: 60.2912% ( 371) 00:15:39.263 10247.447 - 10307.025: 63.7026% ( 
417) 00:15:39.263 10307.025 - 10366.604: 66.5085% ( 343) 00:15:39.263 10366.604 - 10426.182: 69.0854% ( 315) 00:15:39.263 10426.182 - 10485.760: 71.0897% ( 245) 00:15:39.263 10485.760 - 10545.338: 73.0694% ( 242) 00:15:39.264 10545.338 - 10604.916: 74.5991% ( 187) 00:15:39.264 10604.916 - 10664.495: 76.1207% ( 186) 00:15:39.264 10664.495 - 10724.073: 77.4378% ( 161) 00:15:39.264 10724.073 - 10783.651: 78.6404% ( 147) 00:15:39.264 10783.651 - 10843.229: 79.6711% ( 126) 00:15:39.264 10843.229 - 10902.807: 81.1764% ( 184) 00:15:39.264 10902.807 - 10962.385: 82.4280% ( 153) 00:15:39.264 10962.385 - 11021.964: 83.2215% ( 97) 00:15:39.264 11021.964 - 11081.542: 83.9905% ( 94) 00:15:39.264 11081.542 - 11141.120: 84.8740% ( 108) 00:15:39.264 11141.120 - 11200.698: 85.6021% ( 89) 00:15:39.264 11200.698 - 11260.276: 86.1666% ( 69) 00:15:39.264 11260.276 - 11319.855: 86.7392% ( 70) 00:15:39.264 11319.855 - 11379.433: 87.3282% ( 72) 00:15:39.264 11379.433 - 11439.011: 88.0972% ( 94) 00:15:39.264 11439.011 - 11498.589: 88.9152% ( 100) 00:15:39.264 11498.589 - 11558.167: 89.7333% ( 100) 00:15:39.264 11558.167 - 11617.745: 90.7559% ( 125) 00:15:39.264 11617.745 - 11677.324: 91.7212% ( 118) 00:15:39.264 11677.324 - 11736.902: 92.7847% ( 130) 00:15:39.264 11736.902 - 11796.480: 93.7418% ( 117) 00:15:39.264 11796.480 - 11856.058: 94.4290% ( 84) 00:15:39.264 11856.058 - 11915.636: 95.4434% ( 124) 00:15:39.264 11915.636 - 11975.215: 96.0079% ( 69) 00:15:39.264 11975.215 - 12034.793: 96.4905% ( 59) 00:15:39.264 12034.793 - 12094.371: 96.8914% ( 49) 00:15:39.264 12094.371 - 12153.949: 97.2186% ( 40) 00:15:39.264 12153.949 - 12213.527: 97.4476% ( 28) 00:15:39.264 12213.527 - 12273.105: 97.6358% ( 23) 00:15:39.264 12273.105 - 12332.684: 97.7258% ( 11) 00:15:39.264 12332.684 - 12392.262: 97.8158% ( 11) 00:15:39.264 12392.262 - 12451.840: 97.8812% ( 8) 00:15:39.264 12451.840 - 12511.418: 97.9058% ( 3) 00:15:39.264 12749.731 - 12809.309: 97.9548% ( 6) 00:15:39.264 12809.309 - 12868.887: 98.0203% ( 8) 00:15:39.264 12868.887 - 12928.465: 98.0857% ( 8) 00:15:39.264 12928.465 - 12988.044: 98.1430% ( 7) 00:15:39.264 12988.044 - 13047.622: 98.2575% ( 14) 00:15:39.264 13047.622 - 13107.200: 98.3639% ( 13) 00:15:39.264 13107.200 - 13166.778: 98.5029% ( 17) 00:15:39.264 13166.778 - 13226.356: 98.6093% ( 13) 00:15:39.264 13226.356 - 13285.935: 98.6666% ( 7) 00:15:39.264 13285.935 - 13345.513: 98.7320% ( 8) 00:15:39.264 13345.513 - 13405.091: 98.7729% ( 5) 00:15:39.264 13405.091 - 13464.669: 98.8138% ( 5) 00:15:39.264 13464.669 - 13524.247: 98.8384% ( 3) 00:15:39.264 13524.247 - 13583.825: 98.8793% ( 5) 00:15:39.264 13583.825 - 13643.404: 98.9120% ( 4) 00:15:39.264 13643.404 - 13702.982: 98.9365% ( 3) 00:15:39.264 13702.982 - 13762.560: 98.9529% ( 2) 00:15:39.264 26571.869 - 26691.025: 98.9611% ( 1) 00:15:39.264 26691.025 - 26810.182: 98.9774% ( 2) 00:15:39.264 26810.182 - 26929.338: 99.0020% ( 3) 00:15:39.264 26929.338 - 27048.495: 99.0265% ( 3) 00:15:39.264 27048.495 - 27167.651: 99.0592% ( 4) 00:15:39.264 27167.651 - 27286.807: 99.0838% ( 3) 00:15:39.264 27286.807 - 27405.964: 99.1083% ( 3) 00:15:39.264 27405.964 - 27525.120: 99.1247% ( 2) 00:15:39.264 27525.120 - 27644.276: 99.1492% ( 3) 00:15:39.264 27644.276 - 27763.433: 99.1819% ( 4) 00:15:39.264 27763.433 - 27882.589: 99.1901% ( 1) 00:15:39.264 27882.589 - 28001.745: 99.2147% ( 3) 00:15:39.264 28001.745 - 28120.902: 99.2392% ( 3) 00:15:39.264 28120.902 - 28240.058: 99.2556% ( 2) 00:15:39.264 28240.058 - 28359.215: 99.2883% ( 4) 00:15:39.264 28359.215 - 28478.371: 
99.3128% ( 3) 00:15:39.264 28478.371 - 28597.527: 99.3455% ( 4) 00:15:39.264 28597.527 - 28716.684: 99.3783% ( 4) 00:15:39.264 28716.684 - 28835.840: 99.4028% ( 3) 00:15:39.264 28835.840 - 28954.996: 99.4355% ( 4) 00:15:39.264 28954.996 - 29074.153: 99.4683% ( 4) 00:15:39.264 29074.153 - 29193.309: 99.4764% ( 1) 00:15:39.264 34555.345 - 34793.658: 99.5255% ( 6) 00:15:39.264 34793.658 - 35031.971: 99.5828% ( 7) 00:15:39.264 35031.971 - 35270.284: 99.6319% ( 6) 00:15:39.264 35270.284 - 35508.596: 99.6891% ( 7) 00:15:39.264 35508.596 - 35746.909: 99.7546% ( 8) 00:15:39.264 35746.909 - 35985.222: 99.8118% ( 7) 00:15:39.264 35985.222 - 36223.535: 99.8691% ( 7) 00:15:39.264 36223.535 - 36461.847: 99.9264% ( 7) 00:15:39.264 36461.847 - 36700.160: 99.9836% ( 7) 00:15:39.264 36700.160 - 36938.473: 100.0000% ( 2) 00:15:39.264 00:15:39.264 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:39.264 ============================================================================== 00:15:39.264 Range in us Cumulative IO count 00:15:39.264 8638.836 - 8698.415: 0.0082% ( 1) 00:15:39.264 8757.993 - 8817.571: 0.0164% ( 1) 00:15:39.264 8817.571 - 8877.149: 0.0409% ( 3) 00:15:39.264 8877.149 - 8936.727: 0.1473% ( 13) 00:15:39.264 8936.727 - 8996.305: 0.3436% ( 24) 00:15:39.264 8996.305 - 9055.884: 0.6790% ( 41) 00:15:39.264 9055.884 - 9115.462: 1.1616% ( 59) 00:15:39.264 9115.462 - 9175.040: 2.2579% ( 134) 00:15:39.264 9175.040 - 9234.618: 3.5668% ( 160) 00:15:39.264 9234.618 - 9294.196: 5.3092% ( 213) 00:15:39.264 9294.196 - 9353.775: 7.7307% ( 296) 00:15:39.264 9353.775 - 9413.353: 10.5039% ( 339) 00:15:39.264 9413.353 - 9472.931: 14.0543% ( 434) 00:15:39.264 9472.931 - 9532.509: 16.9912% ( 359) 00:15:39.264 9532.509 - 9592.087: 20.5825% ( 439) 00:15:39.264 9592.087 - 9651.665: 24.2228% ( 445) 00:15:39.264 9651.665 - 9711.244: 27.8387% ( 442) 00:15:39.264 9711.244 - 9770.822: 31.6590% ( 467) 00:15:39.264 9770.822 - 9830.400: 35.2912% ( 444) 00:15:39.264 9830.400 - 9889.978: 38.9725% ( 450) 00:15:39.264 9889.978 - 9949.556: 42.3020% ( 407) 00:15:39.264 9949.556 - 10009.135: 46.1469% ( 470) 00:15:39.264 10009.135 - 10068.713: 49.8446% ( 452) 00:15:39.264 10068.713 - 10128.291: 53.4113% ( 436) 00:15:39.264 10128.291 - 10187.869: 56.0946% ( 328) 00:15:39.264 10187.869 - 10247.447: 59.3341% ( 396) 00:15:39.264 10247.447 - 10307.025: 62.2546% ( 357) 00:15:39.264 10307.025 - 10366.604: 65.2241% ( 363) 00:15:39.264 10366.604 - 10426.182: 67.6702% ( 299) 00:15:39.264 10426.182 - 10485.760: 70.2062% ( 310) 00:15:39.264 10485.760 - 10545.338: 72.2922% ( 255) 00:15:39.264 10545.338 - 10604.916: 74.1165% ( 223) 00:15:39.264 10604.916 - 10664.495: 76.0389% ( 235) 00:15:39.264 10664.495 - 10724.073: 77.6342% ( 195) 00:15:39.264 10724.073 - 10783.651: 78.9676% ( 163) 00:15:39.264 10783.651 - 10843.229: 80.2601% ( 158) 00:15:39.264 10843.229 - 10902.807: 81.3073% ( 128) 00:15:39.264 10902.807 - 10962.385: 82.4116% ( 135) 00:15:39.264 10962.385 - 11021.964: 83.5079% ( 134) 00:15:39.264 11021.964 - 11081.542: 84.1378% ( 77) 00:15:39.264 11081.542 - 11141.120: 84.9395% ( 98) 00:15:39.264 11141.120 - 11200.698: 85.5121% ( 70) 00:15:39.264 11200.698 - 11260.276: 86.0602% ( 67) 00:15:39.264 11260.276 - 11319.855: 86.8210% ( 93) 00:15:39.264 11319.855 - 11379.433: 87.5409% ( 88) 00:15:39.264 11379.433 - 11439.011: 88.2772% ( 90) 00:15:39.264 11439.011 - 11498.589: 89.3161% ( 127) 00:15:39.264 11498.589 - 11558.167: 90.2405% ( 113) 00:15:39.264 11558.167 - 11617.745: 91.2385% ( 122) 00:15:39.264 11617.745 - 11677.324: 
92.3511% ( 136) 00:15:39.264 11677.324 - 11736.902: 93.2346% ( 108) 00:15:39.264 11736.902 - 11796.480: 94.0609% ( 101) 00:15:39.264 11796.480 - 11856.058: 94.7562% ( 85) 00:15:39.264 11856.058 - 11915.636: 95.5825% ( 101) 00:15:39.264 11915.636 - 11975.215: 96.0651% ( 59) 00:15:39.264 11975.215 - 12034.793: 96.4741% ( 50) 00:15:39.264 12034.793 - 12094.371: 96.8341% ( 44) 00:15:39.264 12094.371 - 12153.949: 97.1204% ( 35) 00:15:39.264 12153.949 - 12213.527: 97.3658% ( 30) 00:15:39.264 12213.527 - 12273.105: 97.5213% ( 19) 00:15:39.264 12273.105 - 12332.684: 97.6522% ( 16) 00:15:39.264 12332.684 - 12392.262: 97.8076% ( 19) 00:15:39.264 12392.262 - 12451.840: 97.9385% ( 16) 00:15:39.264 12451.840 - 12511.418: 97.9957% ( 7) 00:15:39.264 12511.418 - 12570.996: 98.0285% ( 4) 00:15:39.264 12570.996 - 12630.575: 98.0694% ( 5) 00:15:39.264 12630.575 - 12690.153: 98.1266% ( 7) 00:15:39.264 12690.153 - 12749.731: 98.1675% ( 5) 00:15:39.264 12749.731 - 12809.309: 98.2330% ( 8) 00:15:39.264 12809.309 - 12868.887: 98.2575% ( 3) 00:15:39.264 12868.887 - 12928.465: 98.2739% ( 2) 00:15:39.264 12928.465 - 12988.044: 98.2984% ( 3) 00:15:39.264 12988.044 - 13047.622: 98.3148% ( 2) 00:15:39.264 13047.622 - 13107.200: 98.3557% ( 5) 00:15:39.264 13107.200 - 13166.778: 98.4048% ( 6) 00:15:39.264 13166.778 - 13226.356: 98.4620% ( 7) 00:15:39.264 13226.356 - 13285.935: 98.5357% ( 9) 00:15:39.264 13285.935 - 13345.513: 98.7729% ( 29) 00:15:39.264 13345.513 - 13405.091: 98.8465% ( 9) 00:15:39.264 13405.091 - 13464.669: 98.8711% ( 3) 00:15:39.264 13464.669 - 13524.247: 98.8874% ( 2) 00:15:39.264 13524.247 - 13583.825: 98.9120% ( 3) 00:15:39.264 13583.825 - 13643.404: 98.9365% ( 3) 00:15:39.264 13643.404 - 13702.982: 98.9529% ( 2) 00:15:39.264 24546.211 - 24665.367: 98.9692% ( 2) 00:15:39.264 24665.367 - 24784.524: 98.9938% ( 3) 00:15:39.264 24784.524 - 24903.680: 99.0265% ( 4) 00:15:39.264 24903.680 - 25022.836: 99.0510% ( 3) 00:15:39.264 25022.836 - 25141.993: 99.0838% ( 4) 00:15:39.264 25141.993 - 25261.149: 99.1165% ( 4) 00:15:39.264 25261.149 - 25380.305: 99.1492% ( 4) 00:15:39.264 25380.305 - 25499.462: 99.1738% ( 3) 00:15:39.264 25499.462 - 25618.618: 99.2065% ( 4) 00:15:39.264 25618.618 - 25737.775: 99.2310% ( 3) 00:15:39.264 25737.775 - 25856.931: 99.2637% ( 4) 00:15:39.264 25856.931 - 25976.087: 99.2883% ( 3) 00:15:39.264 25976.087 - 26095.244: 99.3128% ( 3) 00:15:39.264 26095.244 - 26214.400: 99.3455% ( 4) 00:15:39.264 26214.400 - 26333.556: 99.3701% ( 3) 00:15:39.264 26333.556 - 26452.713: 99.3946% ( 3) 00:15:39.264 26452.713 - 26571.869: 99.4274% ( 4) 00:15:39.265 26571.869 - 26691.025: 99.4601% ( 4) 00:15:39.265 26691.025 - 26810.182: 99.4764% ( 2) 00:15:39.265 32648.844 - 32887.156: 99.5255% ( 6) 00:15:39.265 32887.156 - 33125.469: 99.5910% ( 8) 00:15:39.265 33125.469 - 33363.782: 99.6401% ( 6) 00:15:39.265 33363.782 - 33602.095: 99.6973% ( 7) 00:15:39.265 33602.095 - 33840.407: 99.7546% ( 7) 00:15:39.265 33840.407 - 34078.720: 99.8118% ( 7) 00:15:39.265 34078.720 - 34317.033: 99.8691% ( 7) 00:15:39.265 34317.033 - 34555.345: 99.9264% ( 7) 00:15:39.265 34555.345 - 34793.658: 99.9836% ( 7) 00:15:39.265 34793.658 - 35031.971: 100.0000% ( 2) 00:15:39.265 00:15:39.265 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:39.265 ============================================================================== 00:15:39.265 Range in us Cumulative IO count 00:15:39.265 8579.258 - 8638.836: 0.0082% ( 1) 00:15:39.265 8638.836 - 8698.415: 0.0164% ( 1) 00:15:39.265 8757.993 - 8817.571: 0.0736% ( 7) 
00:15:39.265 8817.571 - 8877.149: 0.2045% ( 16) 00:15:39.265 8877.149 - 8936.727: 0.4336% ( 28) 00:15:39.265 8936.727 - 8996.305: 0.7690% ( 41) 00:15:39.265 8996.305 - 9055.884: 1.3498% ( 71) 00:15:39.265 9055.884 - 9115.462: 2.1842% ( 102) 00:15:39.265 9115.462 - 9175.040: 3.0677% ( 108) 00:15:39.265 9175.040 - 9234.618: 4.2539% ( 145) 00:15:39.265 9234.618 - 9294.196: 5.7755% ( 186) 00:15:39.265 9294.196 - 9353.775: 7.9761% ( 269) 00:15:39.265 9353.775 - 9413.353: 10.6103% ( 322) 00:15:39.265 9413.353 - 9472.931: 13.6698% ( 374) 00:15:39.265 9472.931 - 9532.509: 17.2611% ( 439) 00:15:39.265 9532.509 - 9592.087: 20.6315% ( 412) 00:15:39.265 9592.087 - 9651.665: 24.3128% ( 450) 00:15:39.265 9651.665 - 9711.244: 28.0268% ( 454) 00:15:39.265 9711.244 - 9770.822: 31.5445% ( 430) 00:15:39.265 9770.822 - 9830.400: 34.8004% ( 398) 00:15:39.265 9830.400 - 9889.978: 38.2772% ( 425) 00:15:39.265 9889.978 - 9949.556: 42.0648% ( 463) 00:15:39.265 9949.556 - 10009.135: 45.8115% ( 458) 00:15:39.265 10009.135 - 10068.713: 50.0818% ( 522) 00:15:39.265 10068.713 - 10128.291: 53.3704% ( 402) 00:15:39.265 10128.291 - 10187.869: 56.1355% ( 338) 00:15:39.265 10187.869 - 10247.447: 59.4323% ( 403) 00:15:39.265 10247.447 - 10307.025: 61.9846% ( 312) 00:15:39.265 10307.025 - 10366.604: 64.5533% ( 314) 00:15:39.265 10366.604 - 10426.182: 66.6885% ( 261) 00:15:39.265 10426.182 - 10485.760: 68.7827% ( 256) 00:15:39.265 10485.760 - 10545.338: 71.3514% ( 314) 00:15:39.265 10545.338 - 10604.916: 73.6993% ( 287) 00:15:39.265 10604.916 - 10664.495: 75.5236% ( 223) 00:15:39.265 10664.495 - 10724.073: 77.4215% ( 232) 00:15:39.265 10724.073 - 10783.651: 78.8285% ( 172) 00:15:39.265 10783.651 - 10843.229: 80.2765% ( 177) 00:15:39.265 10843.229 - 10902.807: 81.2418% ( 118) 00:15:39.265 10902.807 - 10962.385: 82.3626% ( 137) 00:15:39.265 10962.385 - 11021.964: 83.2870% ( 113) 00:15:39.265 11021.964 - 11081.542: 84.1378% ( 104) 00:15:39.265 11081.542 - 11141.120: 84.7840% ( 79) 00:15:39.265 11141.120 - 11200.698: 85.5203% ( 90) 00:15:39.265 11200.698 - 11260.276: 86.1993% ( 83) 00:15:39.265 11260.276 - 11319.855: 86.9110% ( 87) 00:15:39.265 11319.855 - 11379.433: 87.5082% ( 73) 00:15:39.265 11379.433 - 11439.011: 88.3262% ( 100) 00:15:39.265 11439.011 - 11498.589: 89.2261% ( 110) 00:15:39.265 11498.589 - 11558.167: 90.3141% ( 133) 00:15:39.265 11558.167 - 11617.745: 91.2958% ( 120) 00:15:39.265 11617.745 - 11677.324: 92.1384% ( 103) 00:15:39.265 11677.324 - 11736.902: 93.1446% ( 123) 00:15:39.265 11736.902 - 11796.480: 93.8563% ( 87) 00:15:39.265 11796.480 - 11856.058: 94.7235% ( 106) 00:15:39.265 11856.058 - 11915.636: 95.4679% ( 91) 00:15:39.265 11915.636 - 11975.215: 96.0488% ( 71) 00:15:39.265 11975.215 - 12034.793: 96.5069% ( 56) 00:15:39.265 12034.793 - 12094.371: 96.8995% ( 48) 00:15:39.265 12094.371 - 12153.949: 97.1613% ( 32) 00:15:39.265 12153.949 - 12213.527: 97.4395% ( 34) 00:15:39.265 12213.527 - 12273.105: 97.6767% ( 29) 00:15:39.265 12273.105 - 12332.684: 97.8485% ( 21) 00:15:39.265 12332.684 - 12392.262: 97.9385% ( 11) 00:15:39.265 12392.262 - 12451.840: 98.0366% ( 12) 00:15:39.265 12451.840 - 12511.418: 98.2493% ( 26) 00:15:39.265 12511.418 - 12570.996: 98.3312% ( 10) 00:15:39.265 12570.996 - 12630.575: 98.3475% ( 2) 00:15:39.265 12630.575 - 12690.153: 98.3639% ( 2) 00:15:39.265 12690.153 - 12749.731: 98.3802% ( 2) 00:15:39.265 12749.731 - 12809.309: 98.4048% ( 3) 00:15:39.265 12809.309 - 12868.887: 98.4211% ( 2) 00:15:39.265 12868.887 - 12928.465: 98.4293% ( 1) 00:15:39.265 13047.622 - 13107.200: 98.4539% ( 
3) 00:15:39.265 13107.200 - 13166.778: 98.4702% ( 2) 00:15:39.265 13166.778 - 13226.356: 98.5029% ( 4) 00:15:39.265 13226.356 - 13285.935: 98.5520% ( 6) 00:15:39.265 13285.935 - 13345.513: 98.6338% ( 10) 00:15:39.265 13345.513 - 13405.091: 98.8138% ( 22) 00:15:39.265 13405.091 - 13464.669: 98.8465% ( 4) 00:15:39.265 13464.669 - 13524.247: 98.8629% ( 2) 00:15:39.265 13524.247 - 13583.825: 98.8874% ( 3) 00:15:39.265 13583.825 - 13643.404: 98.9120% ( 3) 00:15:39.265 13643.404 - 13702.982: 98.9365% ( 3) 00:15:39.265 13702.982 - 13762.560: 98.9529% ( 2) 00:15:39.265 22282.240 - 22401.396: 98.9611% ( 1) 00:15:39.265 22639.709 - 22758.865: 98.9692% ( 1) 00:15:39.265 22997.178 - 23116.335: 99.0183% ( 6) 00:15:39.265 23116.335 - 23235.491: 99.1574% ( 17) 00:15:39.265 23235.491 - 23354.647: 99.1819% ( 3) 00:15:39.265 23354.647 - 23473.804: 99.2065% ( 3) 00:15:39.265 23473.804 - 23592.960: 99.2310% ( 3) 00:15:39.265 23592.960 - 23712.116: 99.2556% ( 3) 00:15:39.265 23712.116 - 23831.273: 99.2801% ( 3) 00:15:39.265 23831.273 - 23950.429: 99.3046% ( 3) 00:15:39.265 23950.429 - 24069.585: 99.3210% ( 2) 00:15:39.265 24069.585 - 24188.742: 99.3455% ( 3) 00:15:39.265 24188.742 - 24307.898: 99.3701% ( 3) 00:15:39.265 24307.898 - 24427.055: 99.3946% ( 3) 00:15:39.265 24427.055 - 24546.211: 99.4192% ( 3) 00:15:39.265 24546.211 - 24665.367: 99.4355% ( 2) 00:15:39.265 24665.367 - 24784.524: 99.4601% ( 3) 00:15:39.265 24784.524 - 24903.680: 99.4764% ( 2) 00:15:39.265 29550.778 - 29669.935: 99.6319% ( 19) 00:15:39.265 31218.967 - 31457.280: 99.6810% ( 6) 00:15:39.265 31457.280 - 31695.593: 99.7300% ( 6) 00:15:39.265 31695.593 - 31933.905: 99.7791% ( 6) 00:15:39.265 31933.905 - 32172.218: 99.8282% ( 6) 00:15:39.265 32172.218 - 32410.531: 99.8773% ( 6) 00:15:39.265 32410.531 - 32648.844: 99.9264% ( 6) 00:15:39.265 32648.844 - 32887.156: 99.9755% ( 6) 00:15:39.265 32887.156 - 33125.469: 100.0000% ( 3) 00:15:39.265 00:15:39.265 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:39.265 ============================================================================== 00:15:39.265 Range in us Cumulative IO count 00:15:39.265 8519.680 - 8579.258: 0.0082% ( 1) 00:15:39.265 8579.258 - 8638.836: 0.0164% ( 1) 00:15:39.265 8638.836 - 8698.415: 0.0327% ( 2) 00:15:39.265 8698.415 - 8757.993: 0.0573% ( 3) 00:15:39.265 8757.993 - 8817.571: 0.0900% ( 4) 00:15:39.265 8817.571 - 8877.149: 0.2209% ( 16) 00:15:39.265 8877.149 - 8936.727: 0.5317% ( 38) 00:15:39.265 8936.727 - 8996.305: 0.8753% ( 42) 00:15:39.265 8996.305 - 9055.884: 1.2598% ( 47) 00:15:39.265 9055.884 - 9115.462: 1.8488% ( 72) 00:15:39.265 9115.462 - 9175.040: 2.8550% ( 123) 00:15:39.265 9175.040 - 9234.618: 4.1558% ( 159) 00:15:39.265 9234.618 - 9294.196: 6.3073% ( 263) 00:15:39.265 9294.196 - 9353.775: 8.3933% ( 255) 00:15:39.265 9353.775 - 9413.353: 11.1257% ( 334) 00:15:39.265 9413.353 - 9472.931: 14.0625% ( 359) 00:15:39.265 9472.931 - 9532.509: 18.7009% ( 567) 00:15:39.265 9532.509 - 9592.087: 22.3740% ( 449) 00:15:39.265 9592.087 - 9651.665: 26.6934% ( 528) 00:15:39.265 9651.665 - 9711.244: 28.8858% ( 268) 00:15:39.265 9711.244 - 9770.822: 32.0190% ( 383) 00:15:39.265 9770.822 - 9830.400: 35.4303% ( 417) 00:15:39.265 9830.400 - 9889.978: 38.6126% ( 389) 00:15:39.265 9889.978 - 9949.556: 41.8930% ( 401) 00:15:39.265 9949.556 - 10009.135: 44.9771% ( 377) 00:15:39.265 10009.135 - 10068.713: 48.4293% ( 422) 00:15:39.265 10068.713 - 10128.291: 52.3724% ( 482) 00:15:39.265 10128.291 - 10187.869: 56.3973% ( 492) 00:15:39.266 10187.869 - 10247.447: 
59.4650% ( 375) 00:15:39.266 10247.447 - 10307.025: 62.0010% ( 310) 00:15:39.266 10307.025 - 10366.604: 64.7088% ( 331) 00:15:39.266 10366.604 - 10426.182: 67.7356% ( 370) 00:15:39.266 10426.182 - 10485.760: 69.7644% ( 248) 00:15:39.266 10485.760 - 10545.338: 71.4823% ( 210) 00:15:39.266 10545.338 - 10604.916: 73.8138% ( 285) 00:15:39.266 10604.916 - 10664.495: 75.6790% ( 228) 00:15:39.266 10664.495 - 10724.073: 77.3806% ( 208) 00:15:39.266 10724.073 - 10783.651: 78.5831% ( 147) 00:15:39.266 10783.651 - 10843.229: 79.8020% ( 149) 00:15:39.266 10843.229 - 10902.807: 80.7755% ( 119) 00:15:39.266 10902.807 - 10962.385: 81.7736% ( 122) 00:15:39.266 10962.385 - 11021.964: 82.6652% ( 109) 00:15:39.266 11021.964 - 11081.542: 83.2297% ( 69) 00:15:39.266 11081.542 - 11141.120: 83.7205% ( 60) 00:15:39.266 11141.120 - 11200.698: 84.3096% ( 72) 00:15:39.266 11200.698 - 11260.276: 85.2667% ( 117) 00:15:39.266 11260.276 - 11319.855: 86.6901% ( 174) 00:15:39.266 11319.855 - 11379.433: 87.4591% ( 94) 00:15:39.266 11379.433 - 11439.011: 88.2608% ( 98) 00:15:39.266 11439.011 - 11498.589: 89.0134% ( 92) 00:15:39.266 11498.589 - 11558.167: 89.9787% ( 118) 00:15:39.266 11558.167 - 11617.745: 91.0013% ( 125) 00:15:39.266 11617.745 - 11677.324: 91.8030% ( 98) 00:15:39.266 11677.324 - 11736.902: 92.8747% ( 131) 00:15:39.266 11736.902 - 11796.480: 93.8073% ( 114) 00:15:39.266 11796.480 - 11856.058: 94.8053% ( 122) 00:15:39.266 11856.058 - 11915.636: 95.5497% ( 91) 00:15:39.266 11915.636 - 11975.215: 96.1060% ( 68) 00:15:39.266 11975.215 - 12034.793: 96.6296% ( 64) 00:15:39.266 12034.793 - 12094.371: 97.0059% ( 46) 00:15:39.266 12094.371 - 12153.949: 97.3740% ( 45) 00:15:39.266 12153.949 - 12213.527: 97.6276% ( 31) 00:15:39.266 12213.527 - 12273.105: 97.8567% ( 28) 00:15:39.266 12273.105 - 12332.684: 98.0857% ( 28) 00:15:39.266 12332.684 - 12392.262: 98.2248% ( 17) 00:15:39.266 12392.262 - 12451.840: 98.2821% ( 7) 00:15:39.266 12451.840 - 12511.418: 98.3148% ( 4) 00:15:39.266 12511.418 - 12570.996: 98.3557% ( 5) 00:15:39.266 12570.996 - 12630.575: 98.3721% ( 2) 00:15:39.266 12630.575 - 12690.153: 98.3884% ( 2) 00:15:39.266 12690.153 - 12749.731: 98.4048% ( 2) 00:15:39.266 12749.731 - 12809.309: 98.4211% ( 2) 00:15:39.266 12809.309 - 12868.887: 98.4293% ( 1) 00:15:39.266 12928.465 - 12988.044: 98.4375% ( 1) 00:15:39.266 13107.200 - 13166.778: 98.4784% ( 5) 00:15:39.266 13166.778 - 13226.356: 98.4948% ( 2) 00:15:39.266 13226.356 - 13285.935: 98.5357% ( 5) 00:15:39.266 13285.935 - 13345.513: 98.5684% ( 4) 00:15:39.266 13345.513 - 13405.091: 98.7729% ( 25) 00:15:39.266 13405.091 - 13464.669: 98.8465% ( 9) 00:15:39.266 13464.669 - 13524.247: 98.8711% ( 3) 00:15:39.266 13524.247 - 13583.825: 98.8874% ( 2) 00:15:39.266 13583.825 - 13643.404: 98.8956% ( 1) 00:15:39.266 13643.404 - 13702.982: 98.9202% ( 3) 00:15:39.266 13702.982 - 13762.560: 98.9365% ( 2) 00:15:39.266 13762.560 - 13822.138: 98.9529% ( 2) 00:15:39.266 20137.425 - 20256.582: 98.9611% ( 1) 00:15:39.266 20494.895 - 20614.051: 98.9856% ( 3) 00:15:39.266 20614.051 - 20733.207: 99.0429% ( 7) 00:15:39.266 20733.207 - 20852.364: 99.1247% ( 10) 00:15:39.266 20852.364 - 20971.520: 99.1738% ( 6) 00:15:39.266 20971.520 - 21090.676: 99.2065% ( 4) 00:15:39.266 21090.676 - 21209.833: 99.2310% ( 3) 00:15:39.266 21209.833 - 21328.989: 99.2556% ( 3) 00:15:39.266 21328.989 - 21448.145: 99.2801% ( 3) 00:15:39.266 21448.145 - 21567.302: 99.2965% ( 2) 00:15:39.266 21567.302 - 21686.458: 99.3292% ( 4) 00:15:39.266 21686.458 - 21805.615: 99.3537% ( 3) 00:15:39.266 21805.615 - 
21924.771: 99.3783% ( 3)
00:15:39.266 21924.771 - 22043.927: 99.4028% ( 3)
00:15:39.266 22043.927 - 22163.084: 99.4274% ( 3)
00:15:39.266 22163.084 - 22282.240: 99.4601% ( 4)
00:15:39.266 22282.240 - 22401.396: 99.4764% ( 2)
00:15:39.266 27405.964 - 27525.120: 99.4928% ( 2)
00:15:39.266 27525.120 - 27644.276: 99.5337% ( 5)
00:15:39.266 27644.276 - 27763.433: 99.5501% ( 2)
00:15:39.266 28954.996 - 29074.153: 99.5746% ( 3)
00:15:39.266 29074.153 - 29193.309: 99.5991% ( 3)
00:15:39.266 29193.309 - 29312.465: 99.6237% ( 3)
00:15:39.266 29312.465 - 29431.622: 99.6564% ( 4)
00:15:39.266 29431.622 - 29550.778: 99.6810% ( 3)
00:15:39.266 29550.778 - 29669.935: 99.7055% ( 3)
00:15:39.266 29669.935 - 29789.091: 99.7300% ( 3)
00:15:39.266 29789.091 - 29908.247: 99.7546% ( 3)
00:15:39.266 29908.247 - 30027.404: 99.7873% ( 4)
00:15:39.266 30027.404 - 30146.560: 99.8118% ( 3)
00:15:39.266 30146.560 - 30265.716: 99.8446% ( 4)
00:15:39.266 30265.716 - 30384.873: 99.8691% ( 3)
00:15:39.266 30384.873 - 30504.029: 99.9018% ( 4)
00:15:39.266 30504.029 - 30742.342: 99.9591% ( 7)
00:15:39.266 30742.342 - 30980.655: 100.0000% ( 5)
00:15:39.266
00:15:39.266 17:16:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:15:39.266
00:15:39.266 real 0m2.715s
00:15:39.266 user 0m2.279s
00:15:39.266 sys 0m0.327s
00:15:39.266 17:16:25 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:39.266 17:16:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:15:39.266 ************************************
00:15:39.266 END TEST nvme_perf
00:15:39.266 ************************************
00:15:39.266 17:16:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:15:39.266 17:16:25 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:15:39.266 17:16:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:39.266 17:16:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:39.525 ************************************
00:15:39.525 START TEST nvme_hello_world
00:15:39.525 ************************************
00:15:39.525 17:16:25 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:15:39.783 Initializing NVMe Controllers
00:15:39.783 Attached to 0000:00:10.0
00:15:39.783 Namespace ID: 1 size: 6GB
00:15:39.783 Attached to 0000:00:11.0
00:15:39.783 Namespace ID: 1 size: 5GB
00:15:39.783 Attached to 0000:00:13.0
00:15:39.783 Namespace ID: 1 size: 1GB
00:15:39.783 Attached to 0000:00:12.0
00:15:39.783 Namespace ID: 1 size: 4GB
00:15:39.783 Namespace ID: 2 size: 4GB
00:15:39.783 Namespace ID: 3 size: 4GB
00:15:39.783 Initialization complete.
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
00:15:39.783 INFO: using host memory buffer for IO
00:15:39.783 Hello world!
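The hello_world run above probes the four emulated controllers, attaches to each, and prints one "Hello world!" round trip per namespace (six namespaces in total). For orientation, a minimal sketch of the probe/attach half of that flow against the public SPDK NVMe API; this is an illustrative reconstruction, not the example's actual source, and error handling is trimmed:

/* Sketch: enumerate local PCIe NVMe controllers and list their namespaces,
 * in the style of build/examples/hello_world. Illustrative only. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    /* Returning true asks the driver to attach to this controller. */
    printf("Attaching to %s\n", trid->traddr);
    return true;
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    uint32_t nsid;

    printf("Attached to %s\n", trid->traddr);
    /* Walk the active namespaces, as the example does before issuing its
     * write/read pair on each one. */
    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("Namespace ID: %u size: %juGB\n", nsid,
               (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
    }
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "hello_world_sketch";    /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NULL transport ID: enumerate every local PCIe NVMe controller. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        return 1;
    }
    return 0;
}

After attach, the real example allocates an I/O queue pair per controller, writes a buffer, reads it back, and prints "Hello world!" once the data matches, which is the line repeated six times above, one per attached namespace.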
00:15:39.783 ************************************
00:15:39.783 END TEST nvme_hello_world
00:15:39.783 ************************************
00:15:39.783
00:15:39.783 real 0m0.320s
00:15:39.783 user 0m0.122s
00:15:39.783 sys 0m0.148s
00:15:39.783 17:16:25 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:39.783 17:16:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:15:39.783 17:16:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:15:39.783 17:16:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:39.783 17:16:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:39.783 17:16:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:39.783 ************************************
00:15:39.783 START TEST nvme_sgl
00:15:39.783 ************************************
00:15:39.783 17:16:25 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:15:40.042 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:15:40.042 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:15:40.042 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:15:40.042 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:15:40.042 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:15:40.042 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:15:40.042 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:15:40.042 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:15:40.042 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:15:40.042 NVMe Readv/Writev Request test
00:15:40.042 Attached to 0000:00:10.0
00:15:40.042 Attached to 0000:00:11.0
00:15:40.042 Attached to 0000:00:13.0
00:15:40.042 Attached to 0000:00:12.0
00:15:40.042 0000:00:10.0: build_io_request_2 test passed
00:15:40.042 0000:00:10.0: build_io_request_4 test passed
00:15:40.042 0000:00:10.0: build_io_request_5 test passed
00:15:40.042 0000:00:10.0: build_io_request_6 test passed
00:15:40.042 0000:00:10.0: build_io_request_7 test passed
00:15:40.042 0000:00:10.0: build_io_request_10 test passed
00:15:40.042 0000:00:11.0: build_io_request_2 test passed
00:15:40.042 0000:00:11.0: build_io_request_4 test passed
00:15:40.042 0000:00:11.0: build_io_request_5 test passed
00:15:40.042 0000:00:11.0: build_io_request_6 test passed
00:15:40.042 0000:00:11.0: build_io_request_7 test passed
00:15:40.042 0000:00:11.0: build_io_request_10 test passed
00:15:40.042 Cleaning up...
00:15:40.042
00:15:40.042 real 0m0.400s
00:15:40.042 user 0m0.191s
00:15:40.042 sys 0m0.153s
00:15:40.042 17:16:26 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:40.042 ************************************
00:15:40.042 END TEST nvme_sgl
00:15:40.042 ************************************
00:15:40.042 17:16:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:15:40.302 17:16:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:15:40.302 17:16:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:40.302 17:16:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:40.302 17:16:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:40.302 ************************************
00:15:40.302 START TEST nvme_e2edp
00:15:40.302 ************************************
00:15:40.302 17:16:26 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:15:40.569 NVMe Write/Read with End-to-End data protection test
00:15:40.569 Attached to 0000:00:10.0
00:15:40.569 Attached to 0000:00:11.0
00:15:40.569 Attached to 0000:00:13.0
00:15:40.569 Attached to 0000:00:12.0
00:15:40.569 Cleaning up...
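Returning to the nvme_sgl output above: that test deliberately constructs both malformed and well-formed scatter-gather requests, so the "Invalid IO length parameter" lines are expected rejections (the segment lengths do not add up to the requested transfer size), while the "test passed" lines are the requests that built and completed cleanly. A hedged sketch of how an SGL-backed write goes through the driver's vectored API; the context struct and helper names here are invented for illustration:

/* Sketch: one write whose payload is split across two buffers. Assumes an
 * ns/qpair initialized as in the hello_world sketch; illustrative only. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct sgl_ctx {
    struct { void *base; uint32_t len; } iov[2];    /* two pinned segments */
    int idx;
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
    struct sgl_ctx *ctx = cb_arg;

    /* Restart segment iteration; this sketch only handles offset 0. */
    ctx->idx = 0;
    (void)offset;
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    if (ctx->idx >= 2) {
        return -1;              /* no more segments */
    }
    *address = ctx->iov[ctx->idx].base;
    *length = ctx->iov[ctx->idx].len;
    ctx->idx++;
    return 0;
}

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    /* A real caller would check spdk_nvme_cpl_is_error(cpl) here. */
}

static int
submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count)
{
    /* The driver pulls the payload through reset_sgl/next_sge. If the
     * segment lengths do not total lba_count * sector_size, the request
     * is rejected up front: the "Invalid IO length parameter" case. */
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                   io_done, ctx, 0, reset_sgl, next_sge);
}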
00:15:40.569
00:15:40.569 real 0m0.310s
00:15:40.569 user 0m0.122s
00:15:40.569 sys 0m0.137s
00:15:40.569 ************************************
00:15:40.569 END TEST nvme_e2edp
00:15:40.569 ************************************
00:15:40.569 17:16:26 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:40.569 17:16:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:15:40.569 17:16:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:15:40.569 17:16:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:40.569 17:16:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:40.569 17:16:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:40.569 ************************************
00:15:40.569 START TEST nvme_reserve
00:15:40.569 ************************************
00:15:40.569 17:16:26 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:15:40.828 =====================================================
00:15:40.828 NVMe Controller at PCI bus 0, device 16, function 0
00:15:40.828 =====================================================
00:15:40.828 Reservations: Not Supported
00:15:40.828 =====================================================
00:15:40.828 NVMe Controller at PCI bus 0, device 17, function 0
00:15:40.828 =====================================================
00:15:40.828 Reservations: Not Supported
00:15:40.828 =====================================================
00:15:40.828 NVMe Controller at PCI bus 0, device 19, function 0
00:15:40.828 =====================================================
00:15:40.828 Reservations: Not Supported
00:15:40.828 =====================================================
00:15:40.828 NVMe Controller at PCI bus 0, device 18, function 0
00:15:40.828 =====================================================
00:15:40.828 Reservations: Not Supported
00:15:40.828 Reservation test passed
00:15:40.828 ************************************
00:15:40.828 END TEST nvme_reserve
00:15:40.828 ************************************
00:15:40.828
00:15:40.828 real 0m0.249s
00:15:40.828 user 0m0.090s
00:15:40.828 sys 0m0.111s
00:15:40.828 17:16:26 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:40.828 17:16:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:15:40.828 17:16:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:15:40.828 17:16:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:40.828 17:16:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:40.828 17:16:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:40.828 ************************************
00:15:40.828 START TEST nvme_err_injection
00:15:40.828 ************************************
00:15:40.828 17:16:26 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:15:41.086 NVMe Error Injection test
00:15:41.086 Attached to 0000:00:10.0
00:15:41.086 Attached to 0000:00:11.0
00:15:41.086 Attached to 0000:00:13.0
00:15:41.086 Attached to 0000:00:12.0
00:15:41.086 0000:00:12.0: get features failed as expected
00:15:41.086 0000:00:10.0: get features failed as expected
00:15:41.086 0000:00:11.0: get features failed as expected
00:15:41.086 0000:00:13.0: get features failed as expected
00:15:41.086 0000:00:10.0: get features successfully as expected
00:15:41.086 0000:00:11.0: get features successfully as expected
00:15:41.086 0000:00:13.0: get features successfully as expected
00:15:41.086 0000:00:12.0: get features successfully as expected
00:15:41.086 0000:00:10.0: read failed as expected
00:15:41.086 0000:00:11.0: read failed as expected
00:15:41.086 0000:00:13.0: read failed as expected
00:15:41.086 0000:00:12.0: read failed as expected
00:15:41.086 0000:00:10.0: read successfully as expected
00:15:41.086 0000:00:11.0: read successfully as expected
00:15:41.086 0000:00:13.0: read successfully as expected
00:15:41.086 0000:00:12.0: read successfully as expected
00:15:41.086 Cleaning up...
00:15:41.345 ************************************
00:15:41.345 END TEST nvme_err_injection
00:15:41.345 ************************************
00:15:41.345
00:15:41.345 real 0m0.353s
00:15:41.345 user 0m0.124s
00:15:41.345 sys 0m0.178s
00:15:41.345 17:16:27 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:41.345 17:16:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:15:41.345 17:16:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:41.345 17:16:27 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:15:41.345 17:16:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:41.345 17:16:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:41.345 ************************************
00:15:41.345 START TEST nvme_overhead
00:15:41.345 ************************************
00:15:41.345 17:16:27 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:42.721 Initializing NVMe Controllers
00:15:42.721 Attached to 0000:00:10.0
00:15:42.721 Attached to 0000:00:11.0
00:15:42.721 Attached to 0000:00:13.0
00:15:42.721 Attached to 0000:00:12.0
00:15:42.721 Initialization complete. Launching workers.
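The nvme_overhead run just launched reports, in the lines that follow, the average time the CPU spends inside the submission call ("submit (in ns)") and inside completion polling ("complete (in ns)"). A rough sketch of that measurement pattern, as an assumption about the mechanics rather than the test's actual source; time_one_read and the globals are hypothetical:

/* Sketch: time the submit path vs. the completion-poll path for one I/O.
 * Assumes an initialized ns/qpair and a pinned buffer (spdk_zmalloc);
 * illustrative only, error handling omitted. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t g_submit_ticks, g_complete_ticks, g_completed;

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    g_completed++;
}

static void
time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
    /* -o 4096 in the run above: 4096 bytes expressed in logical blocks. */
    uint32_t lba_count = 4096 / spdk_nvme_ns_get_sector_size(ns);
    uint64_t before = g_completed;
    uint64_t t0 = spdk_get_ticks();

    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, lba_count, io_done, NULL, 0);
    g_submit_ticks += spdk_get_ticks() - t0;

    /* Only the time spent inside the poll call counts as "complete". */
    while (g_completed == before) {
        t0 = spdk_get_ticks();
        spdk_nvme_qpair_process_completions(qpair, 0);
        g_complete_ticks += spdk_get_ticks() - t0;
    }
    /* Ticks convert to nanoseconds as ticks * 1e9 / spdk_get_ticks_hz(),
     * which is how the avg/min/max lines below would be derived. */
}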
00:15:42.721 submit (in ns) avg, min, max = 14848.0, 11971.8, 65399.1 00:15:42.721 complete (in ns) avg, min, max = 11334.5, 8590.9, 1171060.0 00:15:42.721 00:15:42.721 Submit histogram 00:15:42.721 ================ 00:15:42.721 Range in us Cumulative Count 00:15:42.721 11.927 - 11.985: 0.0120% ( 1) 00:15:42.721 11.985 - 12.044: 0.0241% ( 1) 00:15:42.721 12.044 - 12.102: 0.0481% ( 2) 00:15:42.721 12.102 - 12.160: 0.1203% ( 6) 00:15:42.721 12.160 - 12.218: 0.2767% ( 13) 00:15:42.721 12.218 - 12.276: 0.5415% ( 22) 00:15:42.721 12.276 - 12.335: 0.9866% ( 37) 00:15:42.721 12.335 - 12.393: 1.2393% ( 21) 00:15:42.721 12.393 - 12.451: 1.5642% ( 27) 00:15:42.721 12.451 - 12.509: 2.0455% ( 40) 00:15:42.721 12.509 - 12.567: 3.0682% ( 85) 00:15:42.721 12.567 - 12.625: 4.1752% ( 92) 00:15:42.721 12.625 - 12.684: 5.5348% ( 113) 00:15:42.721 12.684 - 12.742: 6.9426% ( 117) 00:15:42.721 12.742 - 12.800: 7.9774% ( 86) 00:15:42.721 12.800 - 12.858: 9.3009% ( 110) 00:15:42.721 12.858 - 12.916: 10.8651% ( 130) 00:15:42.721 12.916 - 12.975: 12.9708% ( 175) 00:15:42.721 12.975 - 13.033: 15.2088% ( 186) 00:15:42.721 13.033 - 13.091: 17.2061% ( 166) 00:15:42.721 13.091 - 13.149: 18.8184% ( 134) 00:15:42.721 13.149 - 13.207: 20.5029% ( 140) 00:15:42.721 13.207 - 13.265: 22.3679% ( 155) 00:15:42.721 13.265 - 13.324: 25.9897% ( 301) 00:15:42.721 13.324 - 13.382: 31.6689% ( 472) 00:15:42.721 13.382 - 13.440: 38.2625% ( 548) 00:15:42.721 13.440 - 13.498: 46.2399% ( 663) 00:15:42.721 13.498 - 13.556: 53.0261% ( 564) 00:15:42.721 13.556 - 13.615: 57.8270% ( 399) 00:15:42.721 13.615 - 13.673: 61.2923% ( 288) 00:15:42.721 13.673 - 13.731: 63.4942% ( 183) 00:15:42.721 13.731 - 13.789: 65.4434% ( 162) 00:15:42.721 13.789 - 13.847: 67.1881% ( 145) 00:15:42.721 13.847 - 13.905: 68.6560% ( 122) 00:15:42.721 13.905 - 13.964: 69.9916% ( 111) 00:15:42.721 13.964 - 14.022: 70.7737% ( 65) 00:15:42.721 14.022 - 14.080: 71.4595% ( 57) 00:15:42.721 14.080 - 14.138: 72.1694% ( 59) 00:15:42.721 14.138 - 14.196: 72.8071% ( 53) 00:15:42.721 14.196 - 14.255: 73.2764% ( 39) 00:15:42.721 14.255 - 14.313: 73.9021% ( 52) 00:15:42.721 14.313 - 14.371: 74.6240% ( 60) 00:15:42.721 14.371 - 14.429: 75.3700% ( 62) 00:15:42.721 14.429 - 14.487: 76.2724% ( 75) 00:15:42.721 14.487 - 14.545: 76.8740% ( 50) 00:15:42.721 14.545 - 14.604: 77.5478% ( 56) 00:15:42.721 14.604 - 14.662: 78.1013% ( 46) 00:15:42.721 14.662 - 14.720: 78.6789% ( 48) 00:15:42.721 14.720 - 14.778: 79.0639% ( 32) 00:15:42.721 14.778 - 14.836: 79.4128% ( 29) 00:15:42.721 14.836 - 14.895: 79.7979% ( 32) 00:15:42.721 14.895 - 15.011: 80.2791% ( 40) 00:15:42.721 15.011 - 15.127: 80.7243% ( 37) 00:15:42.721 15.127 - 15.244: 80.9409% ( 18) 00:15:42.721 15.244 - 15.360: 81.1334% ( 16) 00:15:42.721 15.360 - 15.476: 81.2778% ( 12) 00:15:42.721 15.476 - 15.593: 81.3621% ( 7) 00:15:42.721 15.593 - 15.709: 81.4342% ( 6) 00:15:42.721 15.709 - 15.825: 81.4944% ( 5) 00:15:42.721 15.825 - 15.942: 81.5425% ( 4) 00:15:42.721 15.942 - 16.058: 81.5786% ( 3) 00:15:42.721 16.058 - 16.175: 81.6388% ( 5) 00:15:42.721 16.175 - 16.291: 81.9877% ( 29) 00:15:42.721 16.291 - 16.407: 83.0225% ( 86) 00:15:42.721 16.407 - 16.524: 83.6963% ( 56) 00:15:42.721 16.524 - 16.640: 84.1535% ( 38) 00:15:42.721 16.640 - 16.756: 84.5265% ( 31) 00:15:42.721 16.756 - 16.873: 84.7792% ( 21) 00:15:42.721 16.873 - 16.989: 84.9236% ( 12) 00:15:42.721 16.989 - 17.105: 85.1642% ( 20) 00:15:42.721 17.105 - 17.222: 85.3447% ( 15) 00:15:42.721 17.222 - 17.338: 85.4530% ( 9) 00:15:42.721 17.338 - 17.455: 85.8140% ( 30) 00:15:42.721 
17.455 - 17.571: 87.1014% ( 107) 00:15:42.721 17.571 - 17.687: 88.6416% ( 128) 00:15:42.721 17.687 - 17.804: 89.5319% ( 74) 00:15:42.721 17.804 - 17.920: 90.0734% ( 45) 00:15:42.721 17.920 - 18.036: 90.3862% ( 26) 00:15:42.721 18.036 - 18.153: 90.8314% ( 37) 00:15:42.721 18.153 - 18.269: 91.3488% ( 43) 00:15:42.721 18.269 - 18.385: 91.7098% ( 30) 00:15:42.721 18.385 - 18.502: 91.8301% ( 10) 00:15:42.721 18.502 - 18.618: 92.0707% ( 20) 00:15:42.721 18.618 - 18.735: 92.2151% ( 12) 00:15:42.721 18.735 - 18.851: 92.3475% ( 11) 00:15:42.721 18.851 - 18.967: 92.4798% ( 11) 00:15:42.721 18.967 - 19.084: 92.7325% ( 21) 00:15:42.721 19.084 - 19.200: 92.8649% ( 11) 00:15:42.721 19.200 - 19.316: 92.9371% ( 6) 00:15:42.721 19.316 - 19.433: 93.1296% ( 16) 00:15:42.721 19.433 - 19.549: 93.2018% ( 6) 00:15:42.721 19.549 - 19.665: 93.3582% ( 13) 00:15:42.721 19.665 - 19.782: 93.4785% ( 10) 00:15:42.721 19.782 - 19.898: 93.6229% ( 12) 00:15:42.721 19.898 - 20.015: 93.7312% ( 9) 00:15:42.721 20.015 - 20.131: 93.7673% ( 3) 00:15:42.721 20.131 - 20.247: 93.8636% ( 8) 00:15:42.721 20.247 - 20.364: 93.9718% ( 9) 00:15:42.721 20.364 - 20.480: 94.0681% ( 8) 00:15:42.721 20.480 - 20.596: 94.1283% ( 5) 00:15:42.721 20.596 - 20.713: 94.1764% ( 4) 00:15:42.721 20.713 - 20.829: 94.2606% ( 7) 00:15:42.721 20.829 - 20.945: 94.3328% ( 6) 00:15:42.721 20.945 - 21.062: 94.3930% ( 5) 00:15:42.721 21.062 - 21.178: 94.4652% ( 6) 00:15:42.721 21.178 - 21.295: 94.5253% ( 5) 00:15:42.721 21.295 - 21.411: 94.5614% ( 3) 00:15:42.721 21.411 - 21.527: 94.5975% ( 3) 00:15:42.721 21.527 - 21.644: 94.7058% ( 9) 00:15:42.721 21.644 - 21.760: 94.7539% ( 4) 00:15:42.721 21.760 - 21.876: 94.8021% ( 4) 00:15:42.721 21.876 - 21.993: 94.8863% ( 7) 00:15:42.721 21.993 - 22.109: 94.9344% ( 4) 00:15:42.721 22.109 - 22.225: 95.0307% ( 8) 00:15:42.721 22.225 - 22.342: 95.0788% ( 4) 00:15:42.721 22.342 - 22.458: 95.1510% ( 6) 00:15:42.721 22.458 - 22.575: 95.2232% ( 6) 00:15:42.721 22.575 - 22.691: 95.2473% ( 2) 00:15:42.721 22.691 - 22.807: 95.3195% ( 6) 00:15:42.721 22.807 - 22.924: 95.3556% ( 3) 00:15:42.721 22.924 - 23.040: 95.4398% ( 7) 00:15:42.721 23.040 - 23.156: 95.5240% ( 7) 00:15:42.721 23.156 - 23.273: 95.5481% ( 2) 00:15:42.722 23.273 - 23.389: 95.6082% ( 5) 00:15:42.722 23.389 - 23.505: 95.6564% ( 4) 00:15:42.722 23.505 - 23.622: 95.7286% ( 6) 00:15:42.722 23.622 - 23.738: 95.7646% ( 3) 00:15:42.722 23.738 - 23.855: 95.8248% ( 5) 00:15:42.722 23.855 - 23.971: 95.8609% ( 3) 00:15:42.722 23.971 - 24.087: 95.9090% ( 4) 00:15:42.722 24.087 - 24.204: 95.9572% ( 4) 00:15:42.722 24.204 - 24.320: 95.9933% ( 3) 00:15:42.722 24.320 - 24.436: 96.0294% ( 3) 00:15:42.722 24.436 - 24.553: 96.0655% ( 3) 00:15:42.722 24.553 - 24.669: 96.1016% ( 3) 00:15:42.722 24.669 - 24.785: 96.1136% ( 1) 00:15:42.722 24.785 - 24.902: 96.1376% ( 2) 00:15:42.722 24.902 - 25.018: 96.2098% ( 6) 00:15:42.722 25.018 - 25.135: 96.2339% ( 2) 00:15:42.722 25.135 - 25.251: 96.2820% ( 4) 00:15:42.722 25.251 - 25.367: 96.2941% ( 1) 00:15:42.722 25.367 - 25.484: 96.3302% ( 3) 00:15:42.722 25.484 - 25.600: 96.3783% ( 4) 00:15:42.722 25.600 - 25.716: 96.4144% ( 3) 00:15:42.722 25.716 - 25.833: 96.4625% ( 4) 00:15:42.722 25.833 - 25.949: 96.5106% ( 4) 00:15:42.722 25.949 - 26.065: 96.5708% ( 5) 00:15:42.722 26.065 - 26.182: 96.6189% ( 4) 00:15:42.722 26.298 - 26.415: 96.6550% ( 3) 00:15:42.722 26.415 - 26.531: 96.6791% ( 2) 00:15:42.722 26.531 - 26.647: 96.7032% ( 2) 00:15:42.722 26.647 - 26.764: 96.7393% ( 3) 00:15:42.722 26.764 - 26.880: 96.7874% ( 4) 00:15:42.722 26.880 - 
26.996: 96.8235% ( 3) 00:15:42.722 26.996 - 27.113: 96.8596% ( 3) 00:15:42.722 27.113 - 27.229: 96.8957% ( 3) 00:15:42.722 27.229 - 27.345: 97.0160% ( 10) 00:15:42.722 27.345 - 27.462: 97.1363% ( 10) 00:15:42.722 27.462 - 27.578: 97.2206% ( 7) 00:15:42.722 27.578 - 27.695: 97.4010% ( 15) 00:15:42.722 27.695 - 27.811: 97.5575% ( 13) 00:15:42.722 27.811 - 27.927: 97.7620% ( 17) 00:15:42.722 27.927 - 28.044: 97.9786% ( 18) 00:15:42.722 28.044 - 28.160: 98.1831% ( 17) 00:15:42.722 28.160 - 28.276: 98.3516% ( 14) 00:15:42.722 28.276 - 28.393: 98.4478% ( 8) 00:15:42.722 28.393 - 28.509: 98.5321% ( 7) 00:15:42.722 28.509 - 28.625: 98.5802% ( 4) 00:15:42.722 28.625 - 28.742: 98.6283% ( 4) 00:15:42.722 28.742 - 28.858: 98.6885% ( 5) 00:15:42.722 28.858 - 28.975: 98.7125% ( 2) 00:15:42.722 28.975 - 29.091: 98.7968% ( 7) 00:15:42.722 29.091 - 29.207: 98.8329% ( 3) 00:15:42.722 29.207 - 29.324: 98.8449% ( 1) 00:15:42.722 29.324 - 29.440: 98.9051% ( 5) 00:15:42.722 29.440 - 29.556: 98.9412% ( 3) 00:15:42.722 29.556 - 29.673: 99.0134% ( 6) 00:15:42.722 29.673 - 29.789: 99.0615% ( 4) 00:15:42.722 29.789 - 30.022: 99.1216% ( 5) 00:15:42.722 30.022 - 30.255: 99.1337% ( 1) 00:15:42.722 30.255 - 30.487: 99.1577% ( 2) 00:15:42.722 30.487 - 30.720: 99.1818% ( 2) 00:15:42.722 30.953 - 31.185: 99.2059% ( 2) 00:15:42.722 31.185 - 31.418: 99.2299% ( 2) 00:15:42.722 31.418 - 31.651: 99.2420% ( 1) 00:15:42.722 32.116 - 32.349: 99.2781% ( 3) 00:15:42.722 32.349 - 32.582: 99.2901% ( 1) 00:15:42.722 33.047 - 33.280: 99.3262% ( 3) 00:15:42.722 33.513 - 33.745: 99.3503% ( 2) 00:15:42.722 33.978 - 34.211: 99.4104% ( 5) 00:15:42.722 34.211 - 34.444: 99.4585% ( 4) 00:15:42.722 34.444 - 34.676: 99.4946% ( 3) 00:15:42.722 34.676 - 34.909: 99.5187% ( 2) 00:15:42.722 34.909 - 35.142: 99.5307% ( 1) 00:15:42.722 35.142 - 35.375: 99.5428% ( 1) 00:15:42.722 35.607 - 35.840: 99.5548% ( 1) 00:15:42.722 35.840 - 36.073: 99.5668% ( 1) 00:15:42.722 36.073 - 36.305: 99.5789% ( 1) 00:15:42.722 36.305 - 36.538: 99.5909% ( 1) 00:15:42.722 36.771 - 37.004: 99.6029% ( 1) 00:15:42.722 37.236 - 37.469: 99.6150% ( 1) 00:15:42.722 37.935 - 38.167: 99.6270% ( 1) 00:15:42.722 38.400 - 38.633: 99.6511% ( 2) 00:15:42.722 39.331 - 39.564: 99.6872% ( 3) 00:15:42.722 40.262 - 40.495: 99.6992% ( 1) 00:15:42.722 40.495 - 40.727: 99.7112% ( 1) 00:15:42.722 40.727 - 40.960: 99.7353% ( 2) 00:15:42.722 41.658 - 41.891: 99.7473% ( 1) 00:15:42.722 43.287 - 43.520: 99.7714% ( 2) 00:15:42.722 43.520 - 43.753: 99.7834% ( 1) 00:15:42.722 43.985 - 44.218: 99.7955% ( 1) 00:15:42.722 44.218 - 44.451: 99.8195% ( 2) 00:15:42.722 44.684 - 44.916: 99.8315% ( 1) 00:15:42.722 44.916 - 45.149: 99.8436% ( 1) 00:15:42.722 45.615 - 45.847: 99.8556% ( 1) 00:15:42.722 45.847 - 46.080: 99.8676% ( 1) 00:15:42.722 46.545 - 46.778: 99.8797% ( 1) 00:15:42.722 47.244 - 47.476: 99.8917% ( 1) 00:15:42.722 49.571 - 49.804: 99.9037% ( 1) 00:15:42.722 51.200 - 51.433: 99.9158% ( 1) 00:15:42.722 54.924 - 55.156: 99.9278% ( 1) 00:15:42.722 56.553 - 56.785: 99.9398% ( 1) 00:15:42.722 58.415 - 58.647: 99.9519% ( 1) 00:15:42.722 60.044 - 60.509: 99.9639% ( 1) 00:15:42.722 60.975 - 61.440: 99.9759% ( 1) 00:15:42.722 63.767 - 64.233: 99.9880% ( 1) 00:15:42.722 65.164 - 65.629: 100.0000% ( 1) 00:15:42.722 00:15:42.722 Complete histogram 00:15:42.722 ================== 00:15:42.722 Range in us Cumulative Count 00:15:42.722 8.553 - 8.611: 0.0120% ( 1) 00:15:42.722 8.611 - 8.669: 0.0241% ( 1) 00:15:42.722 8.727 - 8.785: 0.0481% ( 2) 00:15:42.722 8.785 - 8.844: 0.1083% ( 5) 00:15:42.722 8.844 - 8.902: 
0.2527% ( 12) 00:15:42.722 8.902 - 8.960: 0.3730% ( 10) 00:15:42.722 8.960 - 9.018: 0.5415% ( 14) 00:15:42.722 9.018 - 9.076: 0.7821% ( 20) 00:15:42.722 9.076 - 9.135: 1.2273% ( 37) 00:15:42.722 9.135 - 9.193: 2.0094% ( 65) 00:15:42.722 9.193 - 9.251: 2.6351% ( 52) 00:15:42.722 9.251 - 9.309: 3.5736% ( 78) 00:15:42.722 9.309 - 9.367: 4.7648% ( 99) 00:15:42.722 9.367 - 9.425: 6.4493% ( 140) 00:15:42.722 9.425 - 9.484: 8.6271% ( 181) 00:15:42.722 9.484 - 9.542: 11.6111% ( 248) 00:15:42.722 9.542 - 9.600: 14.3424% ( 227) 00:15:42.722 9.600 - 9.658: 17.3144% ( 247) 00:15:42.722 9.658 - 9.716: 22.2356% ( 409) 00:15:42.722 9.716 - 9.775: 31.6809% ( 785) 00:15:42.722 9.775 - 9.833: 42.4016% ( 891) 00:15:42.722 9.833 - 9.891: 51.6665% ( 770) 00:15:42.722 9.891 - 9.949: 57.9232% ( 520) 00:15:42.722 9.949 - 10.007: 61.7134% ( 315) 00:15:42.722 10.007 - 10.065: 64.4086% ( 224) 00:15:42.722 10.065 - 10.124: 66.6225% ( 184) 00:15:42.722 10.124 - 10.182: 68.2469% ( 135) 00:15:42.722 10.182 - 10.240: 69.3178% ( 89) 00:15:42.722 10.240 - 10.298: 70.2322% ( 76) 00:15:42.722 10.298 - 10.356: 70.9060% ( 56) 00:15:42.722 10.356 - 10.415: 71.3031% ( 33) 00:15:42.722 10.415 - 10.473: 71.6761% ( 31) 00:15:42.722 10.473 - 10.531: 71.9408% ( 22) 00:15:42.722 10.531 - 10.589: 72.2777% ( 28) 00:15:42.722 10.589 - 10.647: 72.5905% ( 26) 00:15:42.722 10.647 - 10.705: 72.9034% ( 26) 00:15:42.722 10.705 - 10.764: 73.4208% ( 43) 00:15:42.722 10.764 - 10.822: 73.8178% ( 33) 00:15:42.722 10.822 - 10.880: 74.3593% ( 45) 00:15:42.722 10.880 - 10.938: 74.8767% ( 43) 00:15:42.722 10.938 - 10.996: 75.4662% ( 49) 00:15:42.722 10.996 - 11.055: 75.8874% ( 35) 00:15:42.722 11.055 - 11.113: 76.1882% ( 25) 00:15:42.722 11.113 - 11.171: 76.4649% ( 23) 00:15:42.722 11.171 - 11.229: 76.7778% ( 26) 00:15:42.722 11.229 - 11.287: 76.9703% ( 16) 00:15:42.722 11.287 - 11.345: 77.2470% ( 23) 00:15:42.722 11.345 - 11.404: 77.4756% ( 19) 00:15:42.722 11.404 - 11.462: 77.7403% ( 22) 00:15:42.722 11.462 - 11.520: 77.9329% ( 16) 00:15:42.722 11.520 - 11.578: 78.1735% ( 20) 00:15:42.722 11.578 - 11.636: 78.6909% ( 43) 00:15:42.722 11.636 - 11.695: 79.5452% ( 71) 00:15:42.722 11.695 - 11.753: 80.9048% ( 113) 00:15:42.722 11.753 - 11.811: 82.0118% ( 92) 00:15:42.722 11.811 - 11.869: 83.0105% ( 83) 00:15:42.722 11.869 - 11.927: 83.5760% ( 47) 00:15:42.723 11.927 - 11.985: 84.2739% ( 58) 00:15:42.723 11.985 - 12.044: 85.0439% ( 64) 00:15:42.723 12.044 - 12.102: 85.8380% ( 66) 00:15:42.723 12.102 - 12.160: 86.4517% ( 51) 00:15:42.723 12.160 - 12.218: 87.0292% ( 48) 00:15:42.723 12.218 - 12.276: 87.3782% ( 29) 00:15:42.723 12.276 - 12.335: 87.6790% ( 25) 00:15:42.723 12.335 - 12.393: 88.0881% ( 34) 00:15:42.723 12.393 - 12.451: 88.4851% ( 33) 00:15:42.723 12.451 - 12.509: 88.8220% ( 28) 00:15:42.723 12.509 - 12.567: 89.1830% ( 30) 00:15:42.723 12.567 - 12.625: 89.4357% ( 21) 00:15:42.723 12.625 - 12.684: 89.8568% ( 35) 00:15:42.723 12.684 - 12.742: 90.1697% ( 26) 00:15:42.723 12.742 - 12.800: 90.3983% ( 19) 00:15:42.723 12.800 - 12.858: 90.6389% ( 20) 00:15:42.723 12.858 - 12.916: 90.8435% ( 17) 00:15:42.723 12.916 - 12.975: 90.9878% ( 12) 00:15:42.723 12.975 - 13.033: 91.1563% ( 14) 00:15:42.723 13.033 - 13.091: 91.3248% ( 14) 00:15:42.723 13.091 - 13.149: 91.4210% ( 8) 00:15:42.723 13.149 - 13.207: 91.4691% ( 4) 00:15:42.723 13.207 - 13.265: 91.6015% ( 11) 00:15:42.723 13.265 - 13.324: 91.7459% ( 12) 00:15:42.723 13.324 - 13.382: 91.8542% ( 9) 00:15:42.723 13.382 - 13.440: 91.9865% ( 11) 00:15:42.723 13.440 - 13.498: 92.0106% ( 2) 00:15:42.723 13.498 - 
13.556: 92.1189% ( 9) 00:15:42.723 13.556 - 13.615: 92.1309% ( 1) 00:15:42.723 13.615 - 13.673: 92.2151% ( 7) 00:15:42.723 13.673 - 13.731: 92.2512% ( 3) 00:15:42.723 13.731 - 13.789: 92.2633% ( 1) 00:15:42.723 13.789 - 13.847: 92.3114% ( 4) 00:15:42.723 13.847 - 13.905: 92.3475% ( 3) 00:15:42.723 13.905 - 13.964: 92.3956% ( 4) 00:15:42.723 13.964 - 14.022: 92.4317% ( 3) 00:15:42.723 14.022 - 14.080: 92.4919% ( 5) 00:15:42.723 14.080 - 14.138: 92.5039% ( 1) 00:15:42.723 14.138 - 14.196: 92.5520% ( 4) 00:15:42.723 14.196 - 14.255: 92.5761% ( 2) 00:15:42.723 14.255 - 14.313: 92.6122% ( 3) 00:15:42.723 14.313 - 14.371: 92.6242% ( 1) 00:15:42.723 14.371 - 14.429: 92.6483% ( 2) 00:15:42.723 14.429 - 14.487: 92.6964% ( 4) 00:15:42.723 14.487 - 14.545: 92.7205% ( 2) 00:15:42.723 14.545 - 14.604: 92.7566% ( 3) 00:15:42.723 14.604 - 14.662: 92.8167% ( 5) 00:15:42.723 14.662 - 14.720: 92.8288% ( 1) 00:15:42.723 14.720 - 14.778: 92.8528% ( 2) 00:15:42.723 14.778 - 14.836: 92.8649% ( 1) 00:15:42.723 14.836 - 14.895: 92.9250% ( 5) 00:15:42.723 14.895 - 15.011: 92.9972% ( 6) 00:15:42.723 15.011 - 15.127: 93.0574% ( 5) 00:15:42.723 15.127 - 15.244: 93.0815% ( 2) 00:15:42.723 15.244 - 15.360: 93.1176% ( 3) 00:15:42.723 15.360 - 15.476: 93.1897% ( 6) 00:15:42.723 15.476 - 15.593: 93.2138% ( 2) 00:15:42.723 15.593 - 15.709: 93.2499% ( 3) 00:15:42.723 15.709 - 15.825: 93.3221% ( 6) 00:15:42.723 15.825 - 15.942: 93.4063% ( 7) 00:15:42.723 15.942 - 16.058: 93.5387% ( 11) 00:15:42.723 16.058 - 16.175: 93.6470% ( 9) 00:15:42.723 16.175 - 16.291: 93.7553% ( 9) 00:15:42.723 16.291 - 16.407: 93.8275% ( 6) 00:15:42.723 16.407 - 16.524: 93.8756% ( 4) 00:15:42.723 16.524 - 16.640: 93.9357% ( 5) 00:15:42.723 16.640 - 16.756: 93.9598% ( 2) 00:15:42.723 16.756 - 16.873: 93.9959% ( 3) 00:15:42.723 16.873 - 16.989: 94.0440% ( 4) 00:15:42.723 16.989 - 17.105: 94.1162% ( 6) 00:15:42.723 17.105 - 17.222: 94.1884% ( 6) 00:15:42.723 17.222 - 17.338: 94.2125% ( 2) 00:15:42.723 17.338 - 17.455: 94.2486% ( 3) 00:15:42.723 17.455 - 17.571: 94.2967% ( 4) 00:15:42.723 17.571 - 17.687: 94.3448% ( 4) 00:15:42.723 17.687 - 17.804: 94.4170% ( 6) 00:15:42.723 17.804 - 17.920: 94.4652% ( 4) 00:15:42.723 17.920 - 18.036: 94.4772% ( 1) 00:15:42.723 18.036 - 18.153: 94.5013% ( 2) 00:15:42.723 18.153 - 18.269: 94.5374% ( 3) 00:15:42.723 18.269 - 18.385: 94.5494% ( 1) 00:15:42.723 18.385 - 18.502: 94.5735% ( 2) 00:15:42.723 18.618 - 18.735: 94.6216% ( 4) 00:15:42.723 18.735 - 18.851: 94.6336% ( 1) 00:15:42.723 18.851 - 18.967: 94.6577% ( 2) 00:15:42.723 18.967 - 19.084: 94.6817% ( 2) 00:15:42.723 19.084 - 19.200: 94.7299% ( 4) 00:15:42.723 19.200 - 19.316: 94.7419% ( 1) 00:15:42.723 19.316 - 19.433: 94.8021% ( 5) 00:15:42.723 19.433 - 19.549: 94.8261% ( 2) 00:15:42.723 19.549 - 19.665: 94.8622% ( 3) 00:15:42.723 19.665 - 19.782: 94.9224% ( 5) 00:15:42.723 19.782 - 19.898: 94.9585% ( 3) 00:15:42.723 19.898 - 20.015: 94.9705% ( 1) 00:15:42.723 20.015 - 20.131: 94.9946% ( 2) 00:15:42.723 20.131 - 20.247: 95.0066% ( 1) 00:15:42.723 20.247 - 20.364: 95.1269% ( 10) 00:15:42.723 20.364 - 20.480: 95.1751% ( 4) 00:15:42.723 20.480 - 20.596: 95.2232% ( 4) 00:15:42.723 20.713 - 20.829: 95.2593% ( 3) 00:15:42.723 20.829 - 20.945: 95.2954% ( 3) 00:15:42.723 20.945 - 21.062: 95.3315% ( 3) 00:15:42.723 21.062 - 21.178: 95.3676% ( 3) 00:15:42.723 21.178 - 21.295: 95.4157% ( 4) 00:15:42.723 21.295 - 21.411: 95.4398% ( 2) 00:15:42.723 21.411 - 21.527: 95.4999% ( 5) 00:15:42.723 21.527 - 21.644: 95.5120% ( 1) 00:15:42.723 21.644 - 21.760: 95.5240% ( 1) 
00:15:42.723 21.760 - 21.876: 95.5481% ( 2) 00:15:42.723 21.876 - 21.993: 95.5601% ( 1) 00:15:42.723 21.993 - 22.109: 95.5842% ( 2) 00:15:42.723 22.109 - 22.225: 95.6082% ( 2) 00:15:42.723 22.225 - 22.342: 95.6323% ( 2) 00:15:42.723 22.342 - 22.458: 95.6564% ( 2) 00:15:42.723 22.458 - 22.575: 95.7045% ( 4) 00:15:42.723 22.575 - 22.691: 95.7406% ( 3) 00:15:42.723 22.691 - 22.807: 95.7526% ( 1) 00:15:42.723 22.924 - 23.040: 95.8007% ( 4) 00:15:42.723 23.040 - 23.156: 95.8128% ( 1) 00:15:42.723 23.156 - 23.273: 95.8248% ( 1) 00:15:42.723 23.389 - 23.505: 95.8489% ( 2) 00:15:42.723 23.505 - 23.622: 95.9211% ( 6) 00:15:42.723 23.622 - 23.738: 96.0053% ( 7) 00:15:42.723 23.738 - 23.855: 96.1016% ( 8) 00:15:42.723 23.855 - 23.971: 96.1617% ( 5) 00:15:42.723 23.971 - 24.087: 96.3181% ( 13) 00:15:42.723 24.087 - 24.204: 96.5588% ( 20) 00:15:42.723 24.204 - 24.320: 96.9919% ( 36) 00:15:42.723 24.320 - 24.436: 97.3529% ( 30) 00:15:42.723 24.436 - 24.553: 97.6657% ( 26) 00:15:42.723 24.553 - 24.669: 97.9786% ( 26) 00:15:42.723 24.669 - 24.785: 98.1711% ( 16) 00:15:42.723 24.785 - 24.902: 98.2674% ( 8) 00:15:42.723 24.902 - 25.018: 98.3636% ( 8) 00:15:42.723 25.018 - 25.135: 98.5200% ( 13) 00:15:42.723 25.135 - 25.251: 98.5922% ( 6) 00:15:42.723 25.251 - 25.367: 98.7005% ( 9) 00:15:42.723 25.484 - 25.600: 98.7968% ( 8) 00:15:42.723 25.600 - 25.716: 98.8329% ( 3) 00:15:42.723 25.716 - 25.833: 98.9773% ( 12) 00:15:42.723 25.833 - 25.949: 99.0134% ( 3) 00:15:42.723 26.065 - 26.182: 99.0495% ( 3) 00:15:42.723 26.182 - 26.298: 99.0855% ( 3) 00:15:42.723 26.298 - 26.415: 99.1096% ( 2) 00:15:42.723 26.415 - 26.531: 99.1577% ( 4) 00:15:42.723 26.531 - 26.647: 99.1698% ( 1) 00:15:42.723 26.647 - 26.764: 99.1818% ( 1) 00:15:42.723 26.764 - 26.880: 99.1938% ( 1) 00:15:42.723 26.880 - 26.996: 99.2059% ( 1) 00:15:42.723 26.996 - 27.113: 99.2179% ( 1) 00:15:42.723 27.113 - 27.229: 99.2299% ( 1) 00:15:42.723 27.345 - 27.462: 99.2540% ( 2) 00:15:42.723 27.462 - 27.578: 99.2781% ( 2) 00:15:42.723 27.695 - 27.811: 99.3021% ( 2) 00:15:42.723 27.927 - 28.044: 99.3142% ( 1) 00:15:42.723 28.044 - 28.160: 99.3262% ( 1) 00:15:42.723 28.742 - 28.858: 99.3382% ( 1) 00:15:42.723 28.858 - 28.975: 99.3503% ( 1) 00:15:42.723 29.324 - 29.440: 99.3623% ( 1) 00:15:42.723 29.556 - 29.673: 99.3743% ( 1) 00:15:42.723 29.673 - 29.789: 99.3864% ( 1) 00:15:42.723 30.022 - 30.255: 99.3984% ( 1) 00:15:42.723 30.255 - 30.487: 99.4225% ( 2) 00:15:42.723 30.487 - 30.720: 99.4465% ( 2) 00:15:42.723 30.720 - 30.953: 99.4585% ( 1) 00:15:42.723 30.953 - 31.185: 99.4946% ( 3) 00:15:42.723 31.651 - 31.884: 99.5187% ( 2) 00:15:42.723 31.884 - 32.116: 99.5428% ( 2) 00:15:42.723 32.815 - 33.047: 99.5548% ( 1) 00:15:42.723 33.047 - 33.280: 99.5789% ( 2) 00:15:42.723 33.280 - 33.513: 99.5909% ( 1) 00:15:42.723 33.978 - 34.211: 99.6029% ( 1) 00:15:42.723 34.444 - 34.676: 99.6150% ( 1) 00:15:42.723 34.909 - 35.142: 99.6511% ( 3) 00:15:42.723 35.375 - 35.607: 99.6631% ( 1) 00:15:42.724 35.607 - 35.840: 99.6751% ( 1) 00:15:42.724 37.702 - 37.935: 99.6872% ( 1) 00:15:42.724 38.633 - 38.865: 99.6992% ( 1) 00:15:42.724 38.865 - 39.098: 99.7112% ( 1) 00:15:42.724 39.564 - 39.796: 99.7233% ( 1) 00:15:42.724 39.796 - 40.029: 99.7473% ( 2) 00:15:42.724 40.029 - 40.262: 99.7594% ( 1) 00:15:42.724 41.425 - 41.658: 99.7714% ( 1) 00:15:42.724 41.891 - 42.124: 99.7955% ( 2) 00:15:42.724 43.287 - 43.520: 99.8195% ( 2) 00:15:42.724 43.520 - 43.753: 99.8315% ( 1) 00:15:42.724 45.149 - 45.382: 99.8436% ( 1) 00:15:42.724 47.476 - 47.709: 99.8556% ( 1) 00:15:42.724 47.942 - 
48.175: 99.8676% ( 1) 00:15:42.724 50.735 - 50.967: 99.8797% ( 1) 00:15:42.724 53.062 - 53.295: 99.8917% ( 1) 00:15:42.724 54.924 - 55.156: 99.9037% ( 1) 00:15:42.724 55.855 - 56.087: 99.9158% ( 1) 00:15:42.724 56.320 - 56.553: 99.9278% ( 1) 00:15:42.724 59.578 - 60.044: 99.9398% ( 1) 00:15:42.724 61.440 - 61.905: 99.9639% ( 2) 00:15:42.724 65.629 - 66.095: 99.9759% ( 1) 00:15:42.724 67.025 - 67.491: 99.9880% ( 1) 00:15:42.724 1169.222 - 1176.669: 100.0000% ( 1) 00:15:42.724 00:15:42.724 00:15:42.724 real 0m1.321s 00:15:42.724 user 0m1.104s 00:15:42.724 sys 0m0.166s 00:15:42.724 17:16:28 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.724 17:16:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:15:42.724 ************************************ 00:15:42.724 END TEST nvme_overhead 00:15:42.724 ************************************ 00:15:42.724 17:16:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:15:42.724 17:16:28 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:15:42.724 17:16:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.724 17:16:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.724 ************************************ 00:15:42.724 START TEST nvme_arbitration 00:15:42.724 ************************************ 00:15:42.724 17:16:28 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:15:46.010 Initializing NVMe Controllers 00:15:46.010 Attached to 0000:00:10.0 00:15:46.010 Attached to 0000:00:11.0 00:15:46.010 Attached to 0000:00:13.0 00:15:46.010 Attached to 0000:00:12.0 00:15:46.010 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:15:46.010 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:15:46.010 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:15:46.010 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:15:46.010 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:15:46.010 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:15:46.011 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:15:46.011 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:15:46.011 Initialization complete. Launching workers. 
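Note: the arbitration results below drive all four controllers from four lcores through weighted-round-robin priority queues; each line reports the throughput a (controller, core) pairing achieved plus the equivalent time for 100000 I/Os at that rate. The two columns are reciprocals scaled by the fixed I/O count, which is easy to spot-check (the bc invocations are only a sketch):

# secs/100000 ios = 100000 / IO/s; checks against the first rows below
echo "scale=2; 100000 / 490.67" | bc   # -> 203.80
echo "scale=2; 100000 / 512.00" | bc   # -> 195.31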
00:15:46.011 Starting thread on core 1 with urgent priority queue 00:15:46.011 Starting thread on core 2 with urgent priority queue 00:15:46.011 Starting thread on core 3 with urgent priority queue 00:15:46.011 Starting thread on core 0 with urgent priority queue 00:15:46.011 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:15:46.011 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:15:46.011 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:15:46.011 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:15:46.011 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:15:46.011 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:15:46.011 ======================================================== 00:15:46.011 00:15:46.011 ************************************ 00:15:46.011 END TEST nvme_arbitration 00:15:46.011 ************************************ 00:15:46.011 00:15:46.011 real 0m3.453s 00:15:46.011 user 0m9.371s 00:15:46.011 sys 0m0.170s 00:15:46.011 17:16:32 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.011 17:16:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 17:16:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:15:46.269 17:16:32 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:46.269 17:16:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.269 17:16:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 ************************************ 00:15:46.269 START TEST nvme_single_aen 00:15:46.269 ************************************ 00:15:46.269 17:16:32 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:15:46.529 Asynchronous Event Request test 00:15:46.529 Attached to 0000:00:10.0 00:15:46.529 Attached to 0000:00:11.0 00:15:46.529 Attached to 0000:00:13.0 00:15:46.529 Attached to 0000:00:12.0 00:15:46.529 Reset controller to setup AER completions for this process 00:15:46.529 Registering asynchronous event callbacks... 
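Note: the AER output that follows reports temperatures in Kelvin with the Celsius value in parentheses; the test lowers each controller's threshold beneath the current reading (343 Kelvin / 70 Celsius original threshold versus 323 Kelvin / 50 Celsius current) so the controller fires an asynchronous event, then restores the original value. The conversion in those lines is plain K - 273; a trivial helper, offered only as a sketch:

# Kelvin-to-Celsius bookkeeping for the threshold lines below
k_to_c() { echo $(( $1 - 273 )); }
k_to_c 343   # -> 70 (original threshold)
k_to_c 323   # -> 50 (current temperature)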
00:15:46.529 Getting orig temperature thresholds of all controllers 00:15:46.529 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:46.529 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:46.529 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:46.529 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:46.529 Setting all controllers temperature threshold low to trigger AER 00:15:46.529 Waiting for all controllers temperature threshold to be set lower 00:15:46.529 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:46.529 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:46.529 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:46.529 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:46.529 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:46.529 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:46.529 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:46.529 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:46.529 Waiting for all controllers to trigger AER and reset threshold 00:15:46.529 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:46.529 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:46.529 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:46.529 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:46.529 Cleaning up... 00:15:46.529 ************************************ 00:15:46.529 END TEST nvme_single_aen 00:15:46.529 ************************************ 00:15:46.529 00:15:46.529 real 0m0.329s 00:15:46.529 user 0m0.131s 00:15:46.529 sys 0m0.146s 00:15:46.529 17:16:32 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.529 17:16:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:15:46.529 17:16:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:15:46.529 17:16:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:46.529 17:16:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.529 17:16:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.529 ************************************ 00:15:46.529 START TEST nvme_doorbell_aers 00:15:46.529 ************************************ 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
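Note: the xtrace just above shows how the doorbell test builds its device list: gen_nvme.sh emits the attached controllers as JSON and jq extracts each PCI address (traddr). Run standalone against the same checkout, the equivalent would be:

# Equivalent of the get_nvme_bdfs expansion traced above (sketch)
bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0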
00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:46.529 17:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:47.096 [2024-07-24 17:16:33.028993] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:15:57.066 Executing: test_write_invalid_db 00:15:57.066 Waiting for AER completion... 00:15:57.066 Failure: test_write_invalid_db 00:15:57.066 00:15:57.066 Executing: test_invalid_db_write_overflow_sq 00:15:57.066 Waiting for AER completion... 00:15:57.066 Failure: test_invalid_db_write_overflow_sq 00:15:57.066 00:15:57.066 Executing: test_invalid_db_write_overflow_cq 00:15:57.066 Waiting for AER completion... 00:15:57.066 Failure: test_invalid_db_write_overflow_cq 00:15:57.066 00:15:57.066 17:16:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:57.066 17:16:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:57.066 [2024-07-24 17:16:43.064511] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:07.041 Executing: test_write_invalid_db 00:16:07.041 Waiting for AER completion... 00:16:07.041 Failure: test_write_invalid_db 00:16:07.041 00:16:07.041 Executing: test_invalid_db_write_overflow_sq 00:16:07.041 Waiting for AER completion... 00:16:07.041 Failure: test_invalid_db_write_overflow_sq 00:16:07.041 00:16:07.041 Executing: test_invalid_db_write_overflow_cq 00:16:07.041 Waiting for AER completion... 00:16:07.041 Failure: test_invalid_db_write_overflow_cq 00:16:07.041 00:16:07.041 17:16:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:07.041 17:16:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:07.042 [2024-07-24 17:16:53.093834] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:17.013 Executing: test_write_invalid_db 00:16:17.013 Waiting for AER completion... 00:16:17.013 Failure: test_write_invalid_db 00:16:17.013 00:16:17.013 Executing: test_invalid_db_write_overflow_sq 00:16:17.013 Waiting for AER completion... 00:16:17.013 Failure: test_invalid_db_write_overflow_sq 00:16:17.013 00:16:17.013 Executing: test_invalid_db_write_overflow_cq 00:16:17.013 Waiting for AER completion... 
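Note: the Executing/Failure lines in this test are produced while doorbell_aers deliberately issues invalid doorbell writes and waits for the controller to react; the harness runs one pass per controller regardless of those sub-test messages, each pass capped at 10 s by timeout --preserve-status. Four controllers at roughly 10 s apiece is consistent with the 'real 0m40.250s' reported when the test finishes. The per-controller pattern, reconstructed from the traced lines (the loop form is a sketch):

# One doorbell_aers pass per controller BDF, 10 s cap each (sketch)
for bdf in "${bdfs[@]}"; do
  timeout --preserve-status 10 \
    /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
    -r "trtype:PCIe traddr:$bdf"
done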
00:16:17.013 Failure: test_invalid_db_write_overflow_cq 00:16:17.013 00:16:17.013 17:17:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:17.013 17:17:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:17.013 [2024-07-24 17:17:03.134889] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 Executing: test_write_invalid_db 00:16:27.066 Waiting for AER completion... 00:16:27.066 Failure: test_write_invalid_db 00:16:27.066 00:16:27.066 Executing: test_invalid_db_write_overflow_sq 00:16:27.066 Waiting for AER completion... 00:16:27.066 Failure: test_invalid_db_write_overflow_sq 00:16:27.066 00:16:27.066 Executing: test_invalid_db_write_overflow_cq 00:16:27.066 Waiting for AER completion... 00:16:27.066 Failure: test_invalid_db_write_overflow_cq 00:16:27.066 00:16:27.066 ************************************ 00:16:27.066 END TEST nvme_doorbell_aers 00:16:27.066 ************************************ 00:16:27.066 00:16:27.066 real 0m40.250s 00:16:27.066 user 0m34.061s 00:16:27.066 sys 0m5.792s 00:16:27.066 17:17:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.066 17:17:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:16:27.066 17:17:12 nvme -- nvme/nvme.sh@97 -- # uname 00:16:27.066 17:17:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:16:27.066 17:17:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:27.066 17:17:12 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:16:27.066 17:17:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.066 17:17:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.066 ************************************ 00:16:27.066 START TEST nvme_multi_aen 00:16:27.066 ************************************ 00:16:27.066 17:17:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:27.066 [2024-07-24 17:17:13.216291] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.216422] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.216444] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.218354] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.218406] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.218425] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.220063] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. 
Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.220127] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.220157] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.221603] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.221677] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 [2024-07-24 17:17:13.221696] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68571) is not found. Dropping the request. 00:16:27.066 Child process pid: 69087 00:16:27.325 [Child] Asynchronous Event Request test 00:16:27.325 [Child] Attached to 0000:00:10.0 00:16:27.325 [Child] Attached to 0000:00:11.0 00:16:27.325 [Child] Attached to 0000:00:13.0 00:16:27.325 [Child] Attached to 0000:00:12.0 00:16:27.325 [Child] Registering asynchronous event callbacks... 00:16:27.325 [Child] Getting orig temperature thresholds of all controllers 00:16:27.325 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.325 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.325 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.325 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.325 [Child] Waiting for all controllers to trigger AER and reset threshold 00:16:27.325 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.325 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.325 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.325 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.325 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.325 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.325 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.326 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.326 [Child] Cleaning up... 00:16:27.584 Asynchronous Event Request test 00:16:27.584 Attached to 0000:00:10.0 00:16:27.584 Attached to 0000:00:11.0 00:16:27.584 Attached to 0000:00:13.0 00:16:27.584 Attached to 0000:00:12.0 00:16:27.584 Reset controller to setup AER completions for this process 00:16:27.584 Registering asynchronous event callbacks... 
00:16:27.584 Getting orig temperature thresholds of all controllers 00:16:27.584 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.584 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.584 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.584 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:27.584 Setting all controllers temperature threshold low to trigger AER 00:16:27.584 Waiting for all controllers temperature threshold to be set lower 00:16:27.584 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.584 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:16:27.584 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.584 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:16:27.584 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.584 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:16:27.584 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:27.584 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:16:27.584 Waiting for all controllers to trigger AER and reset threshold 00:16:27.584 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.584 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.584 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.584 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:27.584 Cleaning up... 00:16:27.584 ************************************ 00:16:27.584 END TEST nvme_multi_aen 00:16:27.584 ************************************ 00:16:27.584 00:16:27.584 real 0m0.649s 00:16:27.584 user 0m0.232s 00:16:27.584 sys 0m0.311s 00:16:27.584 17:17:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.584 17:17:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:16:27.584 17:17:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:27.584 17:17:13 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:27.584 17:17:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.584 17:17:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.584 ************************************ 00:16:27.584 START TEST nvme_startup 00:16:27.584 ************************************ 00:16:27.584 17:17:13 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:27.842 Initializing NVMe Controllers 00:16:27.842 Attached to 0000:00:10.0 00:16:27.842 Attached to 0000:00:11.0 00:16:27.842 Attached to 0000:00:13.0 00:16:27.842 Attached to 0000:00:12.0 00:16:27.842 Initialization complete. 00:16:27.842 Time used:224231.078 (us). 
00:16:27.842 ************************************ 00:16:27.842 END TEST nvme_startup 00:16:27.842 ************************************ 00:16:27.842 00:16:27.842 real 0m0.330s 00:16:27.842 user 0m0.120s 00:16:27.842 sys 0m0.156s 00:16:27.842 17:17:13 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.842 17:17:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 17:17:14 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:16:27.842 17:17:14 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:27.842 17:17:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.842 17:17:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 ************************************ 00:16:27.842 START TEST nvme_multi_secondary 00:16:27.842 ************************************ 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69143 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69144 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:16:27.842 17:17:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:16:32.024 Initializing NVMe Controllers 00:16:32.024 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:32.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:16:32.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:16:32.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:16:32.024 Initialization complete. Launching workers. 
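Note: nvme_multi_secondary launches three spdk_nvme_perf instances against the same controllers on disjoint core masks (the primary on 0x1, secondaries on 0x2 and 0x4, all joined to shared-memory group -i 0) to exercise multi-process access. In the tables that follow, the MiB/s column derives from IOPS at the 4 KiB read size, MiB/s = IOPS * 4096 / 2^20; a spot-check against the first core-2 row:

# MiB/s column check for 4 KiB reads (first row of the next table)
echo "scale=2; 2094.24 * 4096 / 1048576" | bc   # -> 8.18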
00:16:32.024 ======================================================== 00:16:32.024 Latency(us) 00:16:32.024 Device Information : IOPS MiB/s Average min max 00:16:32.024 PCIE (0000:00:10.0) NSID 1 from core 2: 2094.24 8.18 7637.96 1380.41 15144.67 00:16:32.024 PCIE (0000:00:11.0) NSID 1 from core 2: 2094.24 8.18 7639.38 1300.79 15070.63 00:16:32.024 PCIE (0000:00:13.0) NSID 1 from core 2: 2094.24 8.18 7651.12 1339.87 16218.85 00:16:32.024 PCIE (0000:00:12.0) NSID 1 from core 2: 2094.24 8.18 7650.91 1454.84 16652.20 00:16:32.024 PCIE (0000:00:12.0) NSID 2 from core 2: 2094.24 8.18 7651.43 1258.36 17908.79 00:16:32.024 PCIE (0000:00:12.0) NSID 3 from core 2: 2094.24 8.18 7651.54 1366.34 15645.30 00:16:32.024 ======================================================== 00:16:32.024 Total : 12565.45 49.08 7647.06 1258.36 17908.79 00:16:32.024 00:16:32.024 17:17:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69143 00:16:32.024 Initializing NVMe Controllers 00:16:32.024 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:32.024 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:32.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:16:32.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:16:32.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:16:32.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:16:32.024 Initialization complete. Launching workers. 00:16:32.024 ======================================================== 00:16:32.024 Latency(us) 00:16:32.024 Device Information : IOPS MiB/s Average min max 00:16:32.024 PCIE (0000:00:10.0) NSID 1 from core 1: 4909.14 19.18 3257.04 1335.76 15703.28 00:16:32.024 PCIE (0000:00:11.0) NSID 1 from core 1: 4909.14 19.18 3258.60 1463.21 15439.10 00:16:32.024 PCIE (0000:00:13.0) NSID 1 from core 1: 4909.14 19.18 3258.38 1263.37 15468.07 00:16:32.024 PCIE (0000:00:12.0) NSID 1 from core 1: 4909.14 19.18 3258.31 1320.49 15353.43 00:16:32.024 PCIE (0000:00:12.0) NSID 2 from core 1: 4909.14 19.18 3258.19 1314.79 15520.80 00:16:32.024 PCIE (0000:00:12.0) NSID 3 from core 1: 4909.14 19.18 3258.09 1322.21 15329.45 00:16:32.024 ======================================================== 00:16:32.024 Total : 29454.86 115.06 3258.10 1263.37 15703.28 00:16:32.024 00:16:33.400 Initializing NVMe Controllers 00:16:33.400 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:33.400 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:33.400 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:33.400 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:33.400 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:33.400 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:16:33.400 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:16:33.400 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:16:33.400 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:16:33.400 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:16:33.400 Initialization complete. Launching workers. 
00:16:33.400 ======================================================== 00:16:33.400 Latency(us) 00:16:33.400 Device Information : IOPS MiB/s Average min max 00:16:33.400 PCIE (0000:00:10.0) NSID 1 from core 0: 7054.29 27.56 2266.20 978.59 8416.45 00:16:33.400 PCIE (0000:00:11.0) NSID 1 from core 0: 7054.29 27.56 2267.59 993.51 8035.09 00:16:33.400 PCIE (0000:00:13.0) NSID 1 from core 0: 7054.29 27.56 2267.52 1025.39 7615.58 00:16:33.400 PCIE (0000:00:12.0) NSID 1 from core 0: 7054.29 27.56 2267.46 989.19 7592.48 00:16:33.400 PCIE (0000:00:12.0) NSID 2 from core 0: 7054.29 27.56 2267.38 966.56 7952.08 00:16:33.400 PCIE (0000:00:12.0) NSID 3 from core 0: 7054.29 27.56 2267.30 893.64 7906.75 00:16:33.400 ======================================================== 00:16:33.400 Total : 42325.75 165.33 2267.24 893.64 8416.45 00:16:33.400 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69144 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69213 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69214 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:16:33.400 17:17:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:16:36.683 Initializing NVMe Controllers 00:16:36.683 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:36.683 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:36.683 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:36.683 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:36.683 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:16:36.683 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:16:36.683 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:16:36.683 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:16:36.683 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:16:36.683 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:16:36.683 Initialization complete. Launching workers. 
00:16:36.683 ======================================================== 00:16:36.683 Latency(us) 00:16:36.683 Device Information : IOPS MiB/s Average min max 00:16:36.683 PCIE (0000:00:10.0) NSID 1 from core 1: 5006.52 19.56 3193.81 1031.30 9055.24 00:16:36.683 PCIE (0000:00:11.0) NSID 1 from core 1: 5006.52 19.56 3195.19 1054.84 9640.60 00:16:36.683 PCIE (0000:00:13.0) NSID 1 from core 1: 5006.52 19.56 3195.06 1024.58 10687.61 00:16:36.683 PCIE (0000:00:12.0) NSID 1 from core 1: 5006.52 19.56 3194.97 1051.26 8668.28 00:16:36.683 PCIE (0000:00:12.0) NSID 2 from core 1: 5006.52 19.56 3194.85 1064.80 9711.93 00:16:36.683 PCIE (0000:00:12.0) NSID 3 from core 1: 5006.52 19.56 3194.69 1064.39 9263.11 00:16:36.683 ======================================================== 00:16:36.683 Total : 30039.12 117.34 3194.76 1024.58 10687.61 00:16:36.683 00:16:36.942 Initializing NVMe Controllers 00:16:36.942 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:36.942 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:36.942 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:36.942 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:36.942 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:36.942 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:16:36.942 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:16:36.942 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:16:36.942 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:16:36.942 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:16:36.942 Initialization complete. Launching workers. 00:16:36.942 ======================================================== 00:16:36.942 Latency(us) 00:16:36.942 Device Information : IOPS MiB/s Average min max 00:16:36.942 PCIE (0000:00:10.0) NSID 1 from core 0: 4862.99 19.00 3288.08 1102.76 9036.75 00:16:36.942 PCIE (0000:00:11.0) NSID 1 from core 0: 4862.99 19.00 3289.82 1123.10 9381.13 00:16:36.942 PCIE (0000:00:13.0) NSID 1 from core 0: 4862.99 19.00 3290.14 1123.89 9444.81 00:16:36.942 PCIE (0000:00:12.0) NSID 1 from core 0: 4862.99 19.00 3290.32 1129.70 8154.32 00:16:36.942 PCIE (0000:00:12.0) NSID 2 from core 0: 4862.99 19.00 3290.60 1112.37 7772.43 00:16:36.942 PCIE (0000:00:12.0) NSID 3 from core 0: 4862.99 19.00 3291.20 1142.78 8053.64 00:16:36.942 ======================================================== 00:16:36.942 Total : 29177.93 113.98 3290.03 1102.76 9444.81 00:16:36.942 00:16:38.846 Initializing NVMe Controllers 00:16:38.846 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:38.846 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:38.846 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:38.846 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:38.846 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:16:38.846 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:16:38.846 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:16:38.846 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:16:38.846 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:16:38.846 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:16:38.846 Initialization complete. Launching workers. 
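Note: the core-2 table below shows far higher average latency than the core-0 and core-1 tables above; with a fixed queue depth the columns are linked by Little's law, avg latency ~ qdepth / IOPS per namespace. At -q 16 and roughly 3182 IOPS per namespace that predicts about 5028 us, in line with the ~5020 us averages reported (a rough check that ignores ramp-up):

# Little's-law check for the per-namespace rows below (-q 16)
echo "scale=2; 16 * 1000000 / 3182.13" | bc   # -> 5028.08 us vs ~5023 us reported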
00:16:38.846 ======================================================== 00:16:38.846 Latency(us) 00:16:38.846 Device Information : IOPS MiB/s Average min max 00:16:38.846 PCIE (0000:00:10.0) NSID 1 from core 2: 3182.13 12.43 5025.28 1021.76 18816.48 00:16:38.846 PCIE (0000:00:11.0) NSID 1 from core 2: 3182.13 12.43 5023.49 1058.99 16869.14 00:16:38.846 PCIE (0000:00:13.0) NSID 1 from core 2: 3185.32 12.44 5018.12 1042.00 18670.60 00:16:38.846 PCIE (0000:00:12.0) NSID 1 from core 2: 3185.32 12.44 5018.03 1094.69 19521.66 00:16:38.846 PCIE (0000:00:12.0) NSID 2 from core 2: 3182.13 12.43 5022.97 1074.89 18589.12 00:16:38.846 PCIE (0000:00:12.0) NSID 3 from core 2: 3182.13 12.43 5022.86 1034.84 18544.53 00:16:38.846 ======================================================== 00:16:38.846 Total : 19099.15 74.61 5021.79 1021.76 19521.66 00:16:38.846 00:16:38.846 ************************************ 00:16:38.846 END TEST nvme_multi_secondary 00:16:38.846 ************************************ 00:16:38.846 17:17:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69213 00:16:38.846 17:17:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69214 00:16:38.846 00:16:38.846 real 0m11.023s 00:16:38.846 user 0m18.569s 00:16:38.846 sys 0m1.045s 00:16:38.846 17:17:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.846 17:17:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:16:39.104 17:17:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:16:39.104 17:17:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:16:39.104 17:17:25 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68134 ]] 00:16:39.104 17:17:25 nvme -- common/autotest_common.sh@1090 -- # kill 68134 00:16:39.104 17:17:25 nvme -- common/autotest_common.sh@1091 -- # wait 68134 00:16:39.104 [2024-07-24 17:17:25.100056] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.100160] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.100195] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.100217] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.102484] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.102557] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.102582] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.102622] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.104782] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 
00:16:39.104 [2024-07-24 17:17:25.104839] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.104863] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.104902] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.107109] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.107178] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.107201] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.104 [2024-07-24 17:17:25.107233] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69086) is not found. Dropping the request. 00:16:39.363 [2024-07-24 17:17:25.356495] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:16:39.363 17:17:25 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:16:39.363 17:17:25 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:16:39.363 17:17:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:39.363 17:17:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:39.363 17:17:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:39.363 17:17:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.363 ************************************ 00:16:39.363 START TEST bdev_nvme_reset_stuck_adm_cmd 00:16:39.363 ************************************ 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:39.363 * Looking for test storage... 
00:16:39.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69369 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69369 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69369 ']' 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.363 17:17:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:39.622 [2024-07-24 17:17:25.661724] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:16:39.622 [2024-07-24 17:17:25.661900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69369 ] 00:16:39.622 [2024-07-24 17:17:25.851468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.881 [2024-07-24 17:17:26.091411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.881 [2024-07-24 17:17:26.091555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.881 [2024-07-24 17:17:26.091738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.881 [2024-07-24 17:17:26.091749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:40.815 nvme0n1 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eHCox.txt 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:40.815 true 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721841446 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69397 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:40.815 17:17:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:43.342 [2024-07-24 17:17:28.979981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:16:43.342 [2024-07-24 17:17:28.980620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:43.342 [2024-07-24 17:17:28.980715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:43.342 [2024-07-24 17:17:28.980749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.342 [2024-07-24 17:17:28.983320] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.342 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69397 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69397 00:16:43.342 17:17:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69397 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eHCox.txt 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eHCox.txt 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69369 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69369 ']' 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69369 00:16:43.342 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69369 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.343 killing process with pid 69369 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69369' 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69369 00:16:43.343 17:17:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69369 00:16:45.270 17:17:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:16:45.270 17:17:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:16:45.270 ************************************ 00:16:45.270 END TEST bdev_nvme_reset_stuck_adm_cmd 00:16:45.270 ************************************ 00:16:45.270 00:16:45.270 real 0m5.931s 00:16:45.270 user 0m20.167s 00:16:45.270 sys 0m0.755s 00:16:45.270 17:17:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.270 17:17:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:45.270 17:17:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:16:45.270 17:17:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:16:45.270 17:17:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:45.270 17:17:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.270 17:17:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.270 ************************************ 00:16:45.270 START TEST nvme_fio 00:16:45.270 ************************************ 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:16:45.270 17:17:31 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:45.270 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:45.529 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:45.529 17:17:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:45.789 17:17:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:45.789 17:17:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:16:45.789 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:46.048 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:46.048 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:46.048 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:16:46.048 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:46.048 17:17:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:46.048 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:46.048 fio-3.35 00:16:46.048 Starting 1 thread 00:16:49.332 00:16:49.332 test: (groupid=0, jobs=1): err= 0: pid=69551: Wed Jul 24 17:17:35 2024 00:16:49.332 read: IOPS=16.6k, BW=64.7MiB/s (67.9MB/s)(130MiB/2001msec) 00:16:49.332 slat (nsec): min=4599, max=57483, avg=6467.46, stdev=2496.03 00:16:49.332 clat (usec): min=278, max=9562, avg=3835.23, stdev=646.10 00:16:49.332 lat (usec): min=283, max=9573, avg=3841.70, stdev=647.22 00:16:49.332 clat percentiles (usec): 00:16:49.332 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:16:49.332 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:16:49.332 | 70.00th=[ 3818], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4817], 00:16:49.332 | 99.00th=[ 6783], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[ 9372], 00:16:49.332 | 99.99th=[ 9503] 00:16:49.332 bw ( KiB/s): min=60784, max=69216, per=97.56%, avg=64674.67, stdev=4253.49, samples=3 00:16:49.332 iops : min=15196, max=17304, avg=16168.67, stdev=1063.37, samples=3 00:16:49.332 write: IOPS=16.6k, BW=64.8MiB/s (68.0MB/s)(130MiB/2001msec); 0 zone resets 00:16:49.332 slat (nsec): min=4900, max=82627, avg=6662.27, stdev=2528.77 00:16:49.332 clat (usec): min=239, max=9616, avg=3850.42, stdev=655.83 00:16:49.332 lat (usec): min=244, max=9627, avg=3857.08, stdev=656.87 00:16:49.332 clat percentiles (usec): 00:16:49.332 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:16:49.332 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3752], 00:16:49.332 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4883], 00:16:49.332 | 99.00th=[ 6915], 99.50th=[ 8029], 99.90th=[ 9372], 99.95th=[ 9503], 00:16:49.332 | 99.99th=[ 9503] 00:16:49.332 bw ( KiB/s): min=60056, max=68896, per=97.02%, avg=64413.33, stdev=4421.33, 
samples=3 00:16:49.332 iops : min=15014, max=17224, avg=16103.33, stdev=1105.33, samples=3 00:16:49.332 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:16:49.332 lat (msec) : 2=0.05%, 4=76.42%, 10=23.48% 00:16:49.332 cpu : usr=98.90%, sys=0.20%, ctx=13, majf=0, minf=608 00:16:49.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:49.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.332 issued rwts: total=33161,33214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.332 00:16:49.332 Run status group 0 (all jobs): 00:16:49.332 READ: bw=64.7MiB/s (67.9MB/s), 64.7MiB/s-64.7MiB/s (67.9MB/s-67.9MB/s), io=130MiB (136MB), run=2001-2001msec 00:16:49.332 WRITE: bw=64.8MiB/s (68.0MB/s), 64.8MiB/s-64.8MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec 00:16:49.590 ----------------------------------------------------- 00:16:49.590 Suppressions used: 00:16:49.590 count bytes template 00:16:49.590 1 32 /usr/src/fio/parse.c 00:16:49.590 1 8 libtcmalloc_minimal.so 00:16:49.590 ----------------------------------------------------- 00:16:49.590 00:16:49.590 17:17:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:49.590 17:17:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:49.590 17:17:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:49.590 17:17:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:49.849 17:17:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:49.849 17:17:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:50.108 17:17:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:50.108 17:17:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:50.108 17:17:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:50.366 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:50.366 fio-3.35 00:16:50.366 Starting 1 thread 00:16:53.648 00:16:53.648 test: (groupid=0, jobs=1): err= 0: pid=69612: Wed Jul 24 17:17:39 2024 00:16:53.648 read: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(130MiB/2001msec) 00:16:53.648 slat (nsec): min=4717, max=47914, avg=6428.77, stdev=1916.20 00:16:53.648 clat (usec): min=279, max=9900, avg=3806.36, stdev=363.23 00:16:53.648 lat (usec): min=285, max=9948, avg=3812.79, stdev=363.75 00:16:53.648 clat percentiles (usec): 00:16:53.648 | 1.00th=[ 3163], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3589], 00:16:53.648 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:16:53.648 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4178], 95.00th=[ 4555], 00:16:53.648 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 6390], 99.95th=[ 8160], 00:16:53.648 | 99.99th=[ 9634] 00:16:53.648 bw ( KiB/s): min=60894, max=69568, per=98.40%, avg=65703.33, stdev=4413.49, samples=3 00:16:53.648 iops : min=15223, max=17392, avg=16425.67, stdev=1103.64, samples=3 00:16:53.648 write: IOPS=16.7k, BW=65.3MiB/s (68.5MB/s)(131MiB/2001msec); 0 zone resets 00:16:53.648 slat (nsec): min=4810, max=49287, avg=6731.41, stdev=2005.34 00:16:53.648 clat (usec): min=289, max=9729, avg=3820.30, stdev=367.57 00:16:53.648 lat (usec): min=295, max=9760, avg=3827.03, stdev=368.11 00:16:53.648 clat percentiles (usec): 00:16:53.648 | 1.00th=[ 3195], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:16:53.648 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:16:53.648 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4555], 00:16:53.648 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 6783], 99.95th=[ 8356], 00:16:53.648 | 99.99th=[ 9372] 00:16:53.648 bw ( KiB/s): min=61277, max=69320, per=97.90%, avg=65511.00, stdev=4038.31, samples=3 00:16:53.648 iops : min=15319, max=17330, avg=16377.67, stdev=1009.71, samples=3 00:16:53.648 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:16:53.648 lat (msec) : 2=0.06%, 4=83.91%, 10=15.98% 00:16:53.648 cpu : usr=98.90%, sys=0.25%, ctx=4, majf=0, minf=607 00:16:53.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:53.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.648 issued rwts: total=33403,33474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.648 00:16:53.648 Run status group 0 (all jobs): 00:16:53.648 READ: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=130MiB (137MB), run=2001-2001msec 00:16:53.648 WRITE: bw=65.3MiB/s (68.5MB/s), 65.3MiB/s-65.3MiB/s (68.5MB/s-68.5MB/s), io=131MiB (137MB), run=2001-2001msec 00:16:53.648 
----------------------------------------------------- 00:16:53.648 Suppressions used: 00:16:53.648 count bytes template 00:16:53.648 1 32 /usr/src/fio/parse.c 00:16:53.648 1 8 libtcmalloc_minimal.so 00:16:53.648 ----------------------------------------------------- 00:16:53.648 00:16:53.906 17:17:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:53.906 17:17:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:53.906 17:17:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:53.906 17:17:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:54.164 17:17:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:54.164 17:17:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:54.423 17:17:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:54.423 17:17:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:54.423 17:17:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:54.423 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:54.423 fio-3.35 00:16:54.423 Starting 1 thread 00:16:58.619 00:16:58.619 test: (groupid=0, jobs=1): err= 0: pid=69673: Wed Jul 24 17:17:44 2024 00:16:58.619 read: IOPS=17.6k, BW=68.7MiB/s (72.0MB/s)(137MiB/2001msec) 00:16:58.619 slat (nsec): min=4749, max=47819, avg=6071.86, stdev=1706.18 
00:16:58.619 clat (usec): min=276, max=7943, avg=3620.94, stdev=330.41 00:16:58.619 lat (usec): min=282, max=7981, avg=3627.01, stdev=330.81 00:16:58.619 clat percentiles (usec): 00:16:58.619 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:16:58.619 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:16:58.619 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 4015], 95.00th=[ 4359], 00:16:58.619 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 6325], 99.95th=[ 7111], 00:16:58.619 | 99.99th=[ 7832] 00:16:58.619 bw ( KiB/s): min=63632, max=72480, per=98.82%, avg=69490.67, stdev=5074.11, samples=3 00:16:58.619 iops : min=15908, max=18120, avg=17372.67, stdev=1268.53, samples=3 00:16:58.619 write: IOPS=17.6k, BW=68.7MiB/s (72.1MB/s)(137MiB/2001msec); 0 zone resets 00:16:58.619 slat (nsec): min=4920, max=95592, avg=6290.12, stdev=1866.10 00:16:58.619 clat (usec): min=247, max=7858, avg=3631.77, stdev=333.40 00:16:58.619 lat (usec): min=253, max=7872, avg=3638.06, stdev=333.83 00:16:58.619 clat percentiles (usec): 00:16:58.619 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3425], 00:16:58.619 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:16:58.619 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 4080], 95.00th=[ 4359], 00:16:58.619 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 6390], 99.95th=[ 7046], 00:16:58.619 | 99.99th=[ 7635] 00:16:58.619 bw ( KiB/s): min=63928, max=72464, per=98.73%, avg=69466.67, stdev=4802.04, samples=3 00:16:58.619 iops : min=15982, max=18116, avg=17366.67, stdev=1200.51, samples=3 00:16:58.619 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:16:58.619 lat (msec) : 2=0.09%, 4=89.41%, 10=10.47% 00:16:58.619 cpu : usr=99.00%, sys=0.10%, ctx=5, majf=0, minf=607 00:16:58.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:58.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.619 issued rwts: total=35176,35199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.619 00:16:58.619 Run status group 0 (all jobs): 00:16:58.619 READ: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:16:58.619 WRITE: bw=68.7MiB/s (72.1MB/s), 68.7MiB/s-68.7MiB/s (72.1MB/s-72.1MB/s), io=137MiB (144MB), run=2001-2001msec 00:16:58.619 ----------------------------------------------------- 00:16:58.619 Suppressions used: 00:16:58.619 count bytes template 00:16:58.619 1 32 /usr/src/fio/parse.c 00:16:58.619 1 8 libtcmalloc_minimal.so 00:16:58.619 ----------------------------------------------------- 00:16:58.619 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:58.619 17:17:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:58.619 17:17:44 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:58.619 17:17:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:58.876 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:58.876 fio-3.35 00:16:58.876 Starting 1 thread 00:17:02.150 00:17:02.150 test: (groupid=0, jobs=1): err= 0: pid=69739: Wed Jul 24 17:17:47 2024 00:17:02.150 read: IOPS=15.1k, BW=58.9MiB/s (61.8MB/s)(118MiB/2006msec) 00:17:02.150 slat (nsec): min=4736, max=95712, avg=6759.73, stdev=2071.82 00:17:02.150 clat (usec): min=1383, max=10488, avg=3515.05, stdev=989.92 00:17:02.150 lat (usec): min=1388, max=10495, avg=3521.81, stdev=990.55 00:17:02.150 clat percentiles (usec): 00:17:02.150 | 1.00th=[ 1729], 5.00th=[ 1876], 10.00th=[ 2024], 20.00th=[ 2409], 00:17:02.150 | 30.00th=[ 3261], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3687], 00:17:02.150 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:17:02.150 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 8291], 99.95th=[ 9241], 00:17:02.150 | 99.99th=[10290] 00:17:02.150 bw ( KiB/s): min=52992, max=65936, per=100.00%, avg=60460.00, stdev=5756.61, samples=4 00:17:02.150 iops : min=13248, max=16484, avg=15115.00, stdev=1439.15, samples=4 00:17:02.150 write: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(118MiB/2006msec); 0 zone resets 00:17:02.150 slat (nsec): min=4904, max=73047, avg=6962.06, stdev=2028.27 00:17:02.150 clat (usec): min=1478, max=26503, avg=4930.98, stdev=3274.46 00:17:02.150 lat (usec): min=1484, max=26511, avg=4937.94, stdev=3274.62 
00:17:02.150 clat percentiles (usec): 00:17:02.150 | 1.00th=[ 1795], 5.00th=[ 2008], 10.00th=[ 2311], 20.00th=[ 3326], 00:17:02.150 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3916], 60.00th=[ 4359], 00:17:02.150 | 70.00th=[ 4555], 80.00th=[ 5080], 90.00th=[ 9634], 95.00th=[12256], 00:17:02.150 | 99.00th=[18220], 99.50th=[19792], 99.90th=[23200], 99.95th=[25822], 00:17:02.150 | 99.99th=[26346] 00:17:02.150 bw ( KiB/s): min=54240, max=65608, per=99.98%, avg=60400.00, stdev=5391.24, samples=4 00:17:02.150 iops : min=13560, max=16402, avg=15100.00, stdev=1347.81, samples=4 00:17:02.150 lat (msec) : 2=7.07%, 4=51.86%, 10=36.22%, 20=4.64%, 50=0.21% 00:17:02.150 cpu : usr=98.90%, sys=0.15%, ctx=14, majf=0, minf=605 00:17:02.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:02.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.150 issued rwts: total=30259,30298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.150 00:17:02.150 Run status group 0 (all jobs): 00:17:02.150 READ: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=118MiB (124MB), run=2006-2006msec 00:17:02.150 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=118MiB (124MB), run=2006-2006msec 00:17:02.150 ----------------------------------------------------- 00:17:02.150 Suppressions used: 00:17:02.150 count bytes template 00:17:02.150 1 32 /usr/src/fio/parse.c 00:17:02.150 1 8 libtcmalloc_minimal.so 00:17:02.150 ----------------------------------------------------- 00:17:02.150 00:17:02.150 ************************************ 00:17:02.150 END TEST nvme_fio 00:17:02.150 ************************************ 00:17:02.150 17:17:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:02.150 17:17:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:17:02.150 00:17:02.150 real 0m16.836s 00:17:02.150 user 0m13.523s 00:17:02.150 sys 0m1.719s 00:17:02.150 17:17:48 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.150 17:17:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:17:02.150 ************************************ 00:17:02.150 END TEST nvme 00:17:02.150 ************************************ 00:17:02.150 00:17:02.150 real 1m32.127s 00:17:02.150 user 3m44.478s 00:17:02.150 sys 0m15.583s 00:17:02.150 17:17:48 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.150 17:17:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.150 17:17:48 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:17:02.151 17:17:48 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:17:02.151 17:17:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.151 17:17:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.151 17:17:48 -- common/autotest_common.sh@10 -- # set +x 00:17:02.151 ************************************ 00:17:02.151 START TEST nvme_scc 00:17:02.151 ************************************ 00:17:02.151 17:17:48 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:17:02.151 * Looking for test storage... 
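[Annotation] Before the nvme_scc trace continues: the nvme_fio pass above ran one fio job per controller (0000:00:10.0 through 0000:00:13.0) with the same recipe each time — identify the namespace, settle on --bs=4096, then launch fio with SPDK's external ioengine. Since the plugin is built with ASan, plain fio cannot simply dlopen it; the trace instead resolves the sanitizer runtime with ldd and preloads it ahead of the plugin. A minimal sketch of that pattern, using the paths from this run (the standalone framing is illustrative; the traced fio_plugin helper in autotest_common.sh also probes libclang_rt.asan for clang builds):

    # Run fio against one controller through the ASan-instrumented SPDK plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # Resolve the sanitizer runtime the plugin links against (libasan here).
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    # Preload the runtime before the plugin so its symbols resolve at dlopen time.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

Note the dots in traddr=0000.00.10.0: fio treats ':' as a filename separator, so the PCI address is passed with '.' in place of ':'.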
00:17:02.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:02.151 17:17:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:02.151 17:17:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:02.408 17:17:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:17:02.408 17:17:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:02.408 17:17:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.408 17:17:48 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.408 17:17:48 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.408 17:17:48 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.408 17:17:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.408 17:17:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.408 17:17:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.408 17:17:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:17:02.409 17:17:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:17:02.409 17:17:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:17:02.409 17:17:48 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.409 17:17:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:17:02.409 17:17:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:17:02.409 17:17:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:17:02.409 17:17:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.924 Waiting for block devices as requested 00:17:02.924 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.924 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.924 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.181 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:08.449 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:08.449 17:17:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:17:08.449 17:17:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:17:08.449 17:17:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:17:08.449 17:17:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:08.449 17:17:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:17:08.449 17:17:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:17:08.450 
17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:08.450 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:17:08.451 17:17:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
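[Annotation] The wall of nvme/functions.sh lines above and below is scan_nvme_ctrls at work: for each /sys/class/nvme/nvme* controller it runs `nvme id-ctrl` and nvme_get folds every "field : value" line into an associative array (nvme0, nvme1, ...), which later checks query as ${nvme0[oacs]}, ${nvme0[mdts]}, and so on. A condensed sketch of that loop follows — the real helper goes through eval/shift plumbing that is elided here, so treat the framing as illustrative:

    # Parse `nvme id-ctrl` output into an associative array, one entry per field.
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip headers and lines without a value
        reg=${reg//[[:space:]]/}         # "vid   " -> "vid"
        nvme0[$reg]=${val# }             # keep the value (minus one pad space)
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    # e.g. ${nvme0[vid]} -> 0x1b36, ${nvme0[sn]} -> "12341   " on this QEMU device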
00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:17:08.451 17:17:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:17:08.451 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:17:08.453 17:17:54 nvme_scc -- nvme/functions.sh@21 
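The capture above is one small pattern repeated for every register: nvme-cli prints one "reg : value" line per field, and nvme/functions.sh splits each line on the first ':' and eval-s the pair into a global associative array named after the device. A minimal sketch of that loop, assuming a plain nvme binary on PATH (the run above invokes /usr/local/src/nvme-cli/nvme) and simplified whitespace handling relative to the real script:

    nvme_get() {
      # Capture "reg : val" lines from nvme-cli into a global associative
      # array named by $1 (e.g. nvme0). Sketch of the loop traced above;
      # padded keys like 'ps    0' are not handled here.
      local ref=$1 reg val
      shift
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
        reg=${reg%% *}                 # 'oacs      ' -> 'oacs'
        val=${val# }                   # drop the single space after ':'
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"     # e.g. nvme0[oacs]=0x12a
      done < <(nvme "$@")
    }
    # usage: nvme_get nvme0 id-ctrl /dev/nvme0; echo "${nvme0[sqes]} ${nvme0[subnqn]}"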
00:17:08.452 17:17:54 nvme_scc -- nvme/functions.sh@57,16-23 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1, capture into nvme0n1[]:
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
    lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0'
    lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:17:08.454 17:17:54 nvme_scc -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
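Each lbafN value captured above describes one LBA format: lbads is log2 of the data block size in bytes, ms the metadata bytes per block, and rp a relative-performance hint. flbas=0x4 selects lbaf4, i.e. 4096-byte (2^12) blocks with no metadata, matching the '(in use)' marker. A small decoding helper, a sketch rather than anything functions.sh provides:

    lbaf_block_size() {
      # Extract the lbads exponent from an lbafN string and return 2^lbads.
      local lbads=${1#*lbads:}
      lbads=${lbads%% *}
      echo $((1 << lbads))
    }
    # lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'  -> 4096  (lbaf4, selected by flbas=0x4)
    # lbaf_block_size 'ms:0 lbads:9 rp:0 '           -> 512   (lbaf0)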
00:17:08.454 17:17:54 nvme_scc -- nvme/functions.sh@47-51 -- # next controller: [[ -e /sys/class/nvme/nvme1 ]]; pci=0000:00:10.0; pci_can_use 0000:00:10.0 (scripts/common.sh@15-24: [[ -z '' ]]; return 0); ctrl_dev=nvme1
00:17:08.454 17:17:54 nvme_scc -- nvme/functions.sh@52,16-23 -- # nvme_get nvme1 id-ctrl /dev/nvme1: /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1, capture into nvme1[]:
    vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0
    mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
    mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
    nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
    fna=0 …
reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:17:08.457 17:17:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:17:08.457 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 
17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
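The trace above is nvme/functions.sh walking nvme-cli identify output line by line: IFS=: splits each "reg: val" pair, entries with an empty value are skipped, and an eval stores the pair in a global associative array (nvme1, nvme1n1, and so on). A minimal sketch of that loop, assuming the pinned nvme-cli path shown in the trace and simplified key/value trimming (the real helper does more key massaging, for example turning "ps 0" into ps0):

# Sketch of the nvme_get loop seen at functions.sh@16..@23 above.
nvme_cli=/usr/local/src/nvme-cli/nvme        # path taken from the trace

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                      # e.g. declare -gA nvme1n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # drop banner and blank lines
        reg=${reg//[[:space:]]/}             # simplified key normalization
        val=${val# }                         # trim the space after the colon
        eval "${ref}[${reg}]=\"\$val\""      # nvme1n1[nsze]="0x17a17a", ...
    done < <("$nvme_cli" "$@")
}

nvme_get nvme1n1 id-ns /dev/nvme1n1          # mirrors functions.sh@57 above

Because IFS is : rather than whitespace, read preserves trailing spaces in values, which is why the trace stores padded strings such as nvme2[sn]='12342   ' verbatim.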
00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:17:08.458 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:17:08.459 17:17:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:17:08.459 17:17:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:17:08.459 17:17:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:08.459 17:17:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:08.459 17:17:54 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.459 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
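At the hand-off from nvme1 to nvme2 above (functions.sh@47..@63), the same pattern repeats per controller: enumerate /sys/class/nvme, filter by PCI address via pci_can_use from scripts/common.sh, snapshot the controller and each of its namespaces with nvme_get, then record the results in the ctrls, nvmes, bdfs and ordered_ctrls tables. A condensed sketch of that skeleton, assuming nvme_get from the previous sketch; the bdf derivation is a stand-in, since the trace only shows the resulting address (0000:00:10.0, 0000:00:12.0):

# Enumeration skeleton matching functions.sh@47..@63 in the trace.
declare -gA ctrls nvmes bdfs
declare -ga ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # stand-in derivation
    pci_can_use "$pci" || continue                    # allow/deny filter
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    declare -gA "${ctrl_dev}_ns=()"
    declare -n _ctrl_ns=${ctrl_dev}_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do               # .../nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev               # keyed by namespace id
    done
    unset -n _ctrl_ns
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done

Storing the per-controller namespace table under an indirected name (nvme2_ns) and recording only that name in nvmes is what lets later code look a controller up in ctrls and then dereference its namespaces with a nameref, exactly as the local -n _ctrl_ns lines in the trace do.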
00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.460 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
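Once these arrays are filled in, the rest of the suite can gate itself on identify bits with plain arithmetic. The ONCS word captured for nvme1 above is oncs=0x15d (nvme2 reports the same value a few entries below), and bit 8 of ONCS is how a controller advertises the Copy command that this nvme_scc test exercises. A hypothetical check in that style; supports_scc is an illustrative name, not a helper from functions.sh:

# Hypothetical bit test over an array populated by nvme_get.
supports_scc() {
    local -n _ctrl=$1
    local oncs=${_ctrl[oncs]:-0}
    (( (oncs & 0x100) != 0 ))    # ONCS bit 8: Copy (Simple Copy) support
}

supports_scc nvme1 && echo "nvme1 advertises Simple Copy"   # 0x15d has bit 8 set

Bash arithmetic accepts the 0x-prefixed strings stored by the parse loop directly, so no extra hex conversion step is needed before masking.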
00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:08.461 17:17:54 
00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl parse (continued): nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
00:17:08.461 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:17:08.462 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
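The eval/IFS churn condensed above is all one helper: nvme_get (nvme/functions.sh@16-23) runs nvme-cli and folds its "name : value" output lines into a global associative array. A minimal sketch of that loop, assuming nvme-cli's plain-text output format; the exact whitespace trimming is reconstructed, not verbatim from functions.sh:

    nvme_get() {                                    # functions.sh@17-23, condensed
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # e.g. declare global nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                # 'oncs      ' -> 'oncs'
            val=${val# }                            # drop the space after ':'
            [[ -n $val ]] || continue               # skip banner/blank lines
            eval "${ref}[\$reg]=\$val"              # nvme2[oncs]=0x15d
        done < <(/usr/local/src/nvme-cli/nvme "$@") # e.g. nvme id-ctrl /dev/nvme2
    }

In this trace, nvme_get nvme2 id-ctrl /dev/nvme2 produced the nvme2 map above, and the same helper fills the nvme2n1/nvme2n2/nvme2n3 and nvme3 maps below.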
00:17:08.462 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=- (nvme2 id-ctrl done)
00:17:08.462 17:17:54 nvme_scc -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme2_ns; found /sys/class/nvme/nvme2/nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
00:17:08.462 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:17:08.463 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:17:08.463 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nguid=00000000000000000000000000000000 eui64=0000000000000000
00:17:08.463 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:17:08.725 17:17:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:17:08.725 17:17:54 nvme_scc -- nvme/functions.sh@54-57 -- # found /sys/class/nvme/nvme2/nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2
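Every namespace block in this trace comes out of the same walk over sysfs (functions.sh@54-58). A condensed reconstruction, with the variable names taken from the trace itself ($ctrl is the /sys/class/nvme/nvmeX path from the outer controller loop, _ctrl_ns the per-controller map declared at @53):

    for ns in "$ctrl/${ctrl##*/}n"*; do        # /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                       # nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev            # _ctrl_ns[1]=nvme2n1
    done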
00:17:08.725 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n2 id-ns: every register matches nvme2n1 above (nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining fields 0, nguid/eui64 all zero, lbaf0-lbaf7 unchanged with lbaf4 in use)
00:17:08.726 17:17:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:17:08.727 17:17:54 nvme_scc -- nvme/functions.sh@54-57 -- # found /sys/class/nvme/nvme2/nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3
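All of these namespaces advertise flbas=0x4, which selects lbaf4: 4096-byte data blocks with no metadata. Under the assumption that flbas bits 3:0 index the active LBA format (as in the NVMe spec), the block size can be recovered from the parsed map like this (illustrative only):

    flbas=${nvme2n1[flbas]}              # 0x4 in this trace
    fmt=$(( flbas & 0xf ))               # bits 3:0 select the active format
    lbaf=${nvme2n1[lbaf$fmt]}            # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}               # '12 rp:0 (in use)'
    lbads=${lbads%% *}                   # '12'
    echo "nvme2n1 block size: $(( 1 << lbads )) bytes"   # 4096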
00:17:08.727 17:17:54 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n3 id-ns: every register matches nvme2n1/nvme2n2 above (nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining fields 0, nguid/eui64 all zero, lbaf0-lbaf7 unchanged with lbaf4 in use)
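The only non-trivial per-namespace limits repeated above are the copy-related ones a simple-copy test cares about: mssrl (maximum single source range length, in blocks), mcl (maximum copy length) and msrc (maximum source range count, a 0-based value per the NVMe spec). A hypothetical request check against the parsed values; the request numbers here are made up for illustration:

    nranges=4 range_len=32                               # example: 4 ranges x 32 LBAs
    (( nranges <= nvme2n3[msrc] + 1 ))        || echo "too many source ranges"
    (( range_len <= nvme2n3[mssrl] ))         || echo "source range too long"
    (( nranges * range_len <= nvme2n3[mcl] )) || echo "total copy too long"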
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:17:08.728 17:17:54 nvme_scc -- scripts/common.sh@24 -- # return 0
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:17:08.728 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:17:08.729 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:17:08.730 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
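Once a controller's registers are parsed, the scanner publishes it into a small set of global maps; the functions.sh@53 line above and the @60-63 lines below are those writes. A reduced sketch of that bookkeeping, reusing the array names from the trace (the per-controller nvmeXnY namespace arrays are assumed to have been filled by nvme_get already):

    # Sketch of the registration step traced at functions.sh@53-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    register_ctrl() {
      local ctrl_dev=$1 pci=$2
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # name of the namespace map, e.g. nvme3_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index by controller number
    }

    register_ctrl nvme3 0000:00:13.0
    echo "${bdfs[nvme3]}"                      # -> 0000:00:13.0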
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:17:08.731 17:17:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]]
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:17:08.731 17:17:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1
00:17:08.732 17:17:54 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:17:08.732 17:17:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:17:08.732 17:17:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
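With all four controllers scanned, nvme_scc.sh asks for the first one whose ONCS register advertises Simple Copy: bit 8 of the Optional NVM Command Support value, which is set in 0x15d (0x15d & 0x100 != 0), so every controller passes and nvme1 is picked first. A stripped-down version of that predicate, following the ctrl_has_scc/get_oncs trace above:

    # Sketch of ctrl_has_scc as traced at functions.sh@182-186:
    # a controller supports Simple Copy when ONCS bit 8 is set.
    ctrl_has_scc() {
      local ctrl=$1 oncs
      local -n _ctrl=$ctrl           # nameref into e.g. the nvme1 register array
      oncs=${_ctrl[oncs]:-0}
      (( oncs & 1 << 8 ))            # 0x15d & 0x100 -> nonzero -> supported
    }

    declare -A nvme1=([oncs]=0x15d)
    ctrl_has_scc nvme1 && echo "nvme1 supports SCC"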
00:17:08.732 17:17:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:09.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:09.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:17:09.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:17:09.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:17:10.120 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:17:10.120 17:17:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:17:10.120 17:17:56 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:17:10.120 17:17:56 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:10.120 17:17:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:17:10.120 ************************************
00:17:10.120 START TEST nvme_simple_copy
00:17:10.120 ************************************
00:17:10.120 17:17:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:17:10.378 Initializing NVMe Controllers
00:17:10.378 Attaching to 0000:00:10.0
00:17:10.378 Controller supports SCC. Attached to 0000:00:10.0
00:17:10.378 Namespace ID: 1 size: 6GB
00:17:10.378 Initialization complete.
00:17:10.378 
00:17:10.378 Controller QEMU NVMe Ctrl (12340 )
00:17:10.378 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:17:10.378 Namespace Block Size:4096
00:17:10.378 Writing LBAs 0 to 63 with Random Data
00:17:10.378 Copied LBAs from 0 - 63 to the Destination LBA 256
00:17:10.378 LBAs matching Written Data: 64
00:17:10.378 
00:17:10.378 real 0m0.331s
00:17:10.378 user 0m0.132s
00:17:10.378 sys 0m0.096s
00:17:10.378 17:17:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:10.378 17:17:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:17:10.378 ************************************
00:17:10.378 END TEST nvme_simple_copy
00:17:10.378 ************************************
00:17:10.378 
00:17:10.378 real 0m8.251s
00:17:10.378 user 0m1.409s
00:17:10.378 sys 0m1.772s
00:17:10.378 17:17:56 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:10.378 17:17:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:17:10.378 ************************************
00:17:10.378 END TEST nvme_scc
00:17:10.378 ************************************
00:17:10.378 17:17:56 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]]
00:17:10.378 17:17:56 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]]
00:17:10.378 17:17:56 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]]
00:17:10.378 17:17:56 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]]
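Every test in this log runs through the run_test helper, which is what prints the starred START/END banners and the real/user/sys summary around the test binary's own output. A bare-bones sketch of such a wrapper, assuming only banner printing and timing (the real helper in autotest_common.sh also validates its arguments and toggles xtrace, as the @1101/@1107 entries show):

    # Simplified run_test wrapper: banners plus timing around the test command.
    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # run the test with its arguments
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }

    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'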
00:17:10.378 17:17:56 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:17:10.378 17:17:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:17:10.378 17:17:56 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:10.378 17:17:56 -- common/autotest_common.sh@10 -- # set +x
00:17:10.378 ************************************
00:17:10.378 START TEST nvme_fdp
00:17:10.378 ************************************
00:17:10.637 17:17:56 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:17:10.637 * Looking for test storage...
00:17:10.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:17:10.637 17:17:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:10.637 17:17:56 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:10.637 17:17:56 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:10.637 17:17:56 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:10.637 17:17:56 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:17:10.637 17:17:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:17:10.637 17:17:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
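nvme_fdp.sh begins by re-sourcing functions.sh, so the ctrls/nvmes/bdfs maps are rebuilt from scratch for this test. A minimal usage sketch of that pattern, with paths taken from the log (scan_nvme_ctrls is the scanner whose trace follows):

    #!/usr/bin/env bash
    # Minimal consumer of functions.sh, mirroring what nvme_fdp.sh does here.
    rootdir=/home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/nvme/functions.sh"   # defines ctrls/nvmes/bdfs and helpers

    scan_nvme_ctrls                                   # walks /sys/class/nvme/nvme*, traced below
    for ctrl in "${ordered_ctrls[@]}"; do
      echo "$ctrl -> ${bdfs[$ctrl]}"                  # e.g. nvme0 -> 0000:00:11.0
    done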
00:17:10.637 17:17:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:10.637 17:17:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:17:10.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:11.153 Waiting for block devices as requested
00:17:11.153 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:17:11.411 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:17:11.411 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:17:11.411 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:17:16.709 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
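The `uio_pci_generic -> nvme` lines above are setup.sh reset handing the QEMU NVMe functions back to the kernel driver. The underlying mechanism is ordinary sysfs driver binding; the sketch below illustrates that mechanism only and is not setup.sh's actual logic, which also handles hugepages, device filtering and the block-device events mentioned in the warning:

    # Illustrative only: rebind one PCI function to a target driver (run as root).
    rebind() {
      local bdf=$1 drv=$2
      # Detach from whatever driver currently owns the device, if any.
      if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
      fi
      # Steer the next probe to the requested driver, then trigger it.
      echo "$drv" > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe
      echo "" > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
    }

    rebind 0000:00:10.0 nvme   # same transition the log reports above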
00:17:16.709 17:18:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:17:16.709 17:18:02 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:17:16.709 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.710 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:17:16.711 17:18:02 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:16.711 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.712 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.712 17:18:02 nvme_fdp -- 
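The per-register cycle repeated above is easier to read outside the trace. Below is a minimal standalone sketch of the same parsing pattern, assuming nvme-cli's plain-text "field : value" id-ctrl output; it is illustrative only, not the verbatim nvme/functions.sh implementation (the array name, device path, and whitespace trimming here are examples):

#!/usr/bin/env bash
# Sketch: parse `nvme id-ctrl` key/value output into a bash associative
# array, the same shape as the nvme_get loop traced above.
declare -A nvme0=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip banner/blank lines with no value
    reg=${reg//[[:space:]]/}         # strip the column padding around the key
    nvme0[$reg]=$val                 # e.g. nvme0[mdts]=' 7'
done < <(nvme id-ctrl /dev/nvme0)
printf 'mdts=%s ver=%s\n' "${nvme0[mdts]}" "${nvme0[ver]}"

The real helper takes the target array name as a parameter (hence the eval calls in the trace); the sketch hard-codes one array to keep the flow visible.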
nvme/functions.sh -- namespace scan for nvme0: local -n _ctrl_ns=nvme0_ns (@53); for ns in "$ctrl/${ctrl##*/}n"* (@54) -> [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] (@55) -> ns_dev=nvme0n1 (@56)
nvme_get nvme0n1 id-ns /dev/nvme0n1: local -gA 'nvme0n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 -- fields parsed into nvme0n1[]:
nvme0n1[nsze]=0x140000  nvme0n1[ncap]=0x140000  nvme0n1[nuse]=0x140000  nvme0n1[nsfeat]=0x14  nvme0n1[nlbaf]=7  nvme0n1[flbas]=0x4
nvme0n1[mc]=0x3  nvme0n1[dpc]=0x1f  nvme0n1[dps]=0  nvme0n1[nmic]=0  nvme0n1[rescap]=0  nvme0n1[fpi]=0  nvme0n1[dlfeat]=1
nvme0n1[nawun]=0  nvme0n1[nawupf]=0  nvme0n1[nacwu]=0  nvme0n1[nabsn]=0  nvme0n1[nabo]=0  nvme0n1[nabspf]=0  nvme0n1[noiob]=0  nvme0n1[nvmcap]=0
nvme0n1[npwg]=0  nvme0n1[npwa]=0  nvme0n1[npdg]=0  nvme0n1[npda]=0  nvme0n1[nows]=0  nvme0n1[mssrl]=128  nvme0n1[mcl]=128  nvme0n1[msrc]=127
nvme0n1[nulbaf]=0  nvme0n1[anagrpid]=0  nvme0n1[nsattr]=0  nvme0n1[nvmsetid]=0  nvme0n1[endgid]=0
nvme0n1[nguid]=00000000000000000000000000000000  nvme0n1[eui64]=0000000000000000
nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '   nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '   nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '   nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'   nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '   nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '   nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
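A quick worked check of those values: the low bits of flbas select the active LBA format, so flbas=0x4 points at lbaf4 ("ms:0 lbads:12", the entry marked "in use"), i.e. 2^12 = 4096-byte logical blocks, and nsze=0x140000 blocks comes out to 5 GiB. Plain shell arithmetic over the captured values:

# Capacity from the nvme0n1[] values above (any POSIX-ish shell with $(( )))
nsze=$((0x140000))               # 1310720 logical blocks
lbads=12                         # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"
bytes=$((nsze * (1 << lbads)))   # 5368709120
echo "$bytes bytes = $((bytes >> 30)) GiB"   # -> 5 GiB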
00:17:16.713 17:18:02 nvme_fdp -- nvme0n1 registered: _ctrl_ns[${ns##*n}]=nvme0n1 (@58); ctrls["$ctrl_dev"]=nvme0 (@60); nvmes["$ctrl_dev"]=nvme0_ns (@61); bdfs["$ctrl_dev"]=0000:00:11.0 (@62); ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 (@63)
next controller: for ctrl in /sys/class/nvme/nvme* (@47) -> [[ -e /sys/class/nvme/nvme1 ]] (@48); pci=0000:00:10.0 (@49); pci_can_use 0000:00:10.0 (scripts/common.sh@15-24) -> return 0; ctrl_dev=nvme1 (@51)
nvme_get nvme1 id-ctrl /dev/nvme1 (@52): local -gA 'nvme1=()'; /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
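Each controller found in sysfs is gated through pci_can_use before being parsed; here both filter checks fall through and it returns 0. A rough sketch of that walk-and-filter shape, assuming space-separated PCI_ALLOWED/PCI_BLOCKED lists (a simplification for illustration, not the exact scripts/common.sh logic):

# Sketch: enumerate NVMe controllers from sysfs, skip filtered PCI
# addresses, then glob each controller's namespaces (the functions.sh@54 glob).
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
    if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]]; then
        continue                                       # allow-list set, address not on it
    fi
    [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] && echo "${ctrl##*/} -> ${ns##*/} ($pci)"
    done
done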
nvme/functions.sh -- nvme_get nvme1 (id-ctrl): fields parsed into nvme1[], same cycle and layout as nvme0 above:
nvme1[vid]=0x1b36  nvme1[ssvid]=0x1af4  nvme1[sn]='12340 '  nvme1[mn]='QEMU NVMe Ctrl '  nvme1[fr]='8.0.0 '  nvme1[rab]=6  nvme1[ieee]=525400
nvme1[cmic]=0  nvme1[mdts]=7  nvme1[cntlid]=0  nvme1[ver]=0x10400  nvme1[rtd3r]=0  nvme1[rtd3e]=0  nvme1[oaes]=0x100  nvme1[ctratt]=0x8000  nvme1[rrls]=0
nvme1[cntrltype]=1  nvme1[fguid]=00000000-0000-0000-0000-000000000000  nvme1[crdt1]=0  nvme1[crdt2]=0  nvme1[crdt3]=0  nvme1[nvmsr]=0  nvme1[vwci]=0  nvme1[mec]=0
nvme1[oacs]=0x12a  nvme1[acl]=3  nvme1[aerl]=3  nvme1[frmw]=0x3  nvme1[lpa]=0x7  nvme1[elpe]=0  nvme1[npss]=0  nvme1[avscc]=0  nvme1[apsta]=0
nvme1[wctemp]=343  nvme1[cctemp]=373  nvme1[mtfa]=0  nvme1[hmpre]=0  nvme1[hmmin]=0  nvme1[tnvmcap]=0  nvme1[unvmcap]=0  nvme1[rpmbs]=0
nvme1[edstt]=0  nvme1[dsto]=0  nvme1[fwug]=0  nvme1[kas]=0  nvme1[hctma]=0  nvme1[mntmt]=0  nvme1[mxtmt]=0  nvme1[sanicap]=0  nvme1[hmminds]=0
nvme1[hmmaxd]=0  nvme1[nsetidmax]=0  nvme1[endgidmax]=0  nvme1[anatt]=0  nvme1[anacap]=0  nvme1[anagrpmax]=0  nvme1[nanagrpid]=0  nvme1[pels]=0
read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.716 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.717 17:18:02 nvme_fdp -- 
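The dump above is the shell trace of the nvme_get helper in nvme/functions.sh, which walks the "reg : val" lines printed by nvme-cli and caches each pair in a global associative array. A minimal sketch of that loop, reconstructed from the traced statements (@16-@23); how the command output is fed into the loop and how register names are normalized are assumptions here, not shown verbatim in the log:

    # Reconstruction of the traced nvme_get loop (functions.sh@16-23).
    nvme_get() {
        local ref=$1 reg val                 # @17: e.g. ref=nvme1
        shift                                # @18
        local -gA "$ref=()"                  # @20: declare the global assoc array
        while IFS=: read -r reg val; do      # @21: split "reg : val" output lines
            [[ -n $val ]] || continue        # @22: keep only lines carrying a value
            reg=${reg//[[:space:]]/}         # assumed key normalization
            val=${val# }
            eval "${ref}[\$reg]=\"\$val\""   # @23: e.g. nvme1[oacs]="0x12a"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. id-ctrl /dev/nvme1
    }

    # usage as seen in the trace:
    nvme_get nvme1 id-ctrl /dev/nvme1   # fills nvme1[crdt3]=0, nvme1[oacs]=0x12a, ...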
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme1n1 reg val; shift; local -gA 'nvme1n1=()'
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:17:16.717 17:18:02 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme1n1[]:
00:17:16.717 17:18:02 nvme_fdp --   nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:17:16.717 17:18:02 nvme_fdp --   nvme1n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:17:16.718 17:18:02 nvme_fdp --   nvme1n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:17:16.718 17:18:02 nvme_fdp --   nvme1n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:17:16.718 17:18:02 nvme_fdp --   nvme1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:17:16.718 17:18:02 nvme_fdp --   nvme1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
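As a quick sanity check on the table above: per the NVMe spec, FLBAS bits 3:0 select the active LBA format and LBADS is the log2 of the logical block size, so the values cached for nvme1n1 decode to a 4096-byte-block namespace of about 5.9 GiB. A worked decode (plain arithmetic, not part of the test script):

    # Decode of the nvme1n1 geometry cached above.
    flbas=$((0x7)); lbaf_idx=$((flbas & 0xf))   # -> 7, the entry marked "(in use)"
    lbads=12                                    # from lbaf7: "ms:64 lbads:12 rp:0"
    nsze=$((0x17a17a))                          # namespace size, in logical blocks
    echo "$((1 << lbads)) B per block"          # 4096 B
    echo "$((nsze * (1 << lbads))) B total"     # 6343335936 B, ~5.9 GiB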
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:17:16.718 17:18:02 nvme_fdp -- scripts/common.sh@15-24 -- # local i; [[ '' =~ 0000:00:12.0 ]]; [[ -z '' ]]; return 0
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:17:16.718 17:18:02 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme2 reg val; shift; local -gA 'nvme2=()'
00:17:16.719 17:18:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
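The registration lines above (@47-@63) are the outer discovery loop: every /sys/class/nvme/nvmeX controller that passes the PCI allow/block filter has its id-ctrl and per-namespace id-ns output cached via nvme_get, then gets recorded in the global ctrls/nvmes/bdfs/ordered_ctrls maps. A sketch of that loop as the trace implies it; the function name, the sysfs BDF lookup, and the internals of pci_can_use are assumptions, while the per-line logic mirrors the traced statements:

    # Sketch of the discovery loop implied by functions.sh@47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_ctrls() {                                        # hypothetical name
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do             # @47
            [[ -e $ctrl ]] || continue                    # @48
            pci=$(readlink -f "$ctrl/device")             # @49 (assumed lookup)
            pci=${pci##*/}                                # BDF, e.g. 0000:00:12.0
            pci_can_use "$pci" || continue                # @50: allow/block filter
            ctrl_dev=${ctrl##*/}                          # @51: e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" # @52: cache id-ctrl fields
            local -n _ctrl_ns=${ctrl_dev}_ns              # @53
            for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: nvme2n1, nvme2n2, ...
                [[ -e $ns ]] || continue                  # @55
                ns_dev=${ns##*/}                          # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: cache id-ns fields
                _ctrl_ns[${ns##*n}]=$ns_dev               # @58: index by namespace id
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                  # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # @61: name of the ns array
            bdfs["$ctrl_dev"]=$pci                        # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63
        done
    }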
00:17:16.719 17:18:02 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl fields parsed into nvme2[]:
00:17:16.719 17:18:02 nvme_fdp --   nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0
00:17:16.719 17:18:02 nvme_fdp --   nvme2: ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:17:16.719 17:18:02 nvme_fdp --   nvme2: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:17:16.719 17:18:02 nvme_fdp --   nvme2: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
00:17:16.720 17:18:02 nvme_fdp --   nvme2: mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:17:16.984 17:18:02 nvme_fdp --   nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
00:17:16.985 17:18:02 nvme_fdp
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.985 
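The block above is the tail of nvme_get reading `nvme id-ctrl` output for nvme2: for every `reg : val` line the helper sets IFS=':', runs `read -r reg val`, and eval-assigns the pair into the nvme2 associative array (the @21-@23 trace lines). A minimal standalone sketch of that parsing pattern follows; the loop is illustrative and not the actual functions.sh source, and only the nvme-cli path is taken from this log:

  #!/usr/bin/env bash
  # Illustrative re-creation of the id-ctrl parse visible in the trace:
  # split each "reg : val" line on the first colon and store the pair
  # into a bash associative array.
  declare -A ctrl=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # nvme-cli pads the field name with spaces
      val=${val# }               # drop the single space after the colon
      [[ -n $reg ]] && ctrl[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
  echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} nn=${ctrl[nn]}"

functions.sh routes the assignment through eval so the array name (nvme2, nvme2n1, ...) can be chosen at run time; the sketch hard-codes one array for brevity.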
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:17:16.985 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:17:16.986 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:17:16.987 17:18:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
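The id-ns fields just captured for nvme2n1 are enough to work out the namespace geometry by hand: flbas=0x4 selects LBA format 4 (the entry tagged "(in use)"), lbaf4 reports lbads:12, so logical blocks are 2^12 = 4096 bytes, and nsze=0x100000 blocks comes to 4 GiB. A quick arithmetic check in the same shell dialect (values copied from the trace; this check is an editorial aside, not part of the test run):

  # Recompute nvme2n1's size from the captured id-ns fields.
  nsze=0x100000 flbas=0x4 lbads=12
  fmt=$(( flbas & 0xf ))        # low nibble of flbas -> format index 4
  bs=$(( 1 << lbads ))          # lbads:12 -> 4096-byte logical blocks
  echo "lbaf$fmt: bs=$bs total=$(( nsze * bs )) bytes"
  # prints: lbaf4: bs=4096 total=4294967296 bytes (4 GiB)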
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:17:16.987 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
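The @53-@58 lines repeat this registration step for every namespace: a bash nameref (local -n _ctrl_ns=nvme2_ns) aliases the controller's namespace map, the glob "$ctrl/${ctrl##*/}n"* walks /sys/class/nvme/nvme2/nvme2n*, and each hit is filed under its numeric suffix via ${ns##*n}. A self-contained sketch of that pattern, where register_ns and the map initialisation are illustrative names; only the sysfs layout and the parameter expansions come from the trace:

  #!/usr/bin/env bash
  declare -gA nvme2_ns=()                 # per-controller namespace map
  register_ns() {                         # hypothetical helper name
      local ctrl=$1 ns
      local -n _ctrl_ns=${ctrl##*/}_ns    # nameref -> nvme2_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do # nvme2n1 nvme2n2 nvme2n3 ...
          [[ -e $ns ]] || continue
          _ctrl_ns[${ns##*n}]=${ns##*/}   # ${ns##*n} strips through the last 'n' -> 1, 2, 3
      done
  }
  register_ns /sys/class/nvme/nvme2
  declare -p nvme2_ns                     # e.g. [1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3"

The nameref is why the trace shows _ctrl_ns[...] being assigned while the data actually lands in nvme2_ns: the helper never needs to know the concrete map name.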
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:17:16.988 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:17:16.989 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0
lbads:9 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:17:16.990 17:18:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:17:16.990 17:18:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:17:16.990 17:18:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:16.990 17:18:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:17:16.990 17:18:03 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.990 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
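[Editor's note] The xtrace above is the per-register half of functions.sh's nvme_get: each line of `nvme id-ctrl` output is split on `:` into reg/val and stored into a controller-named associative array. A minimal standalone sketch of that loop, assuming nvme-cli's usual "field : value" output format (the real helper routes through `eval` so the array name can be passed in as a reference):

    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip padding around the key
        [[ -n $reg && -n $val ]] || continue  # skip blanks and banner lines
        nvme3[$reg]=${val# }                  # keep the value, minus one leading space
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[mdts]}"                     # -> 7, matching the value captured above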
00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.991 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
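[Editor's note] Once filled, these arrays are read back by name through bash namerefs, which is what the `local -n _ctrl=nvmeN` / `echo 0x...` lines further down are doing. A hedged sketch of that getter (names mirror the functions.sh calls visible in this log, simplified to skip the namespace fall-through):

    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=${2:-cntrltype}
        local -n _ctrl=$ctrl               # nameref onto the array named e.g. "nvme3"
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    get_nvme_ctrl_feature nvme3 ctratt     # -> 0x88010 for this controller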
00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.992 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 
17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:17:16.993 17:18:03 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:17:16.993 17:18:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:17:16.993 17:18:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:17:16.993 17:18:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:18.142 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.142 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.142 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.142 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.400 17:18:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:17:18.400 17:18:04 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:18.400 17:18:04 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.400 17:18:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:17:18.400 ************************************ 00:17:18.400 START TEST nvme_flexible_data_placement 00:17:18.400 ************************************ 00:17:18.400 17:18:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:17:18.659 Initializing NVMe Controllers 00:17:18.659 Attaching to 0000:00:13.0 00:17:18.659 Controller supports FDP Attached to 0000:00:13.0 00:17:18.659 Namespace ID: 1 Endurance Group ID: 1 
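[Editor's note] The controller walk just above settles on nvme3 because ctrl_has_fdp tests CTRATT bit 19, the Flexible Data Placement capability bit; the other three controllers report 0x8000 (bit 15 only) and fail it. The same test in isolation, using the values captured in this log:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))   # bit 19 set => controller supports FDP
    }
    ctrl_has_fdp 0x8000  && echo fdp || echo no-fdp   # nvme0/nvme1/nvme2 -> no-fdp
    ctrl_has_fdp 0x88010 && echo fdp || echo no-fdp   # nvme3             -> fdp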
00:17:18.659 Initialization complete. 00:17:18.659 00:17:18.659 ================================== 00:17:18.659 == FDP tests for Namespace: #01 == 00:17:18.659 ================================== 00:17:18.659 00:17:18.659 Get Feature: FDP: 00:17:18.659 ================= 00:17:18.659 Enabled: Yes 00:17:18.659 FDP configuration Index: 0 00:17:18.659 00:17:18.659 FDP configurations log page 00:17:18.659 =========================== 00:17:18.659 Number of FDP configurations: 1 00:17:18.659 Version: 0 00:17:18.659 Size: 112 00:17:18.659 FDP Configuration Descriptor: 0 00:17:18.659 Descriptor Size: 96 00:17:18.659 Reclaim Group Identifier format: 2 00:17:18.659 FDP Volatile Write Cache: Not Present 00:17:18.659 FDP Configuration: Valid 00:17:18.659 Vendor Specific Size: 0 00:17:18.659 Number of Reclaim Groups: 2 00:17:18.659 Number of Reclaim Unit Handles: 8 00:17:18.659 Max Placement Identifiers: 128 00:17:18.659 Number of Namespaces Supported: 256 00:17:18.659 Reclaim Unit Nominal Size: 6000000 bytes 00:17:18.659 Estimated Reclaim Unit Time Limit: Not Reported 00:17:18.659 RUH Desc #000: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #001: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #002: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #003: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #004: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #005: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #006: RUH Type: Initially Isolated 00:17:18.659 RUH Desc #007: RUH Type: Initially Isolated 00:17:18.659 00:17:18.659 FDP reclaim unit handle usage log page 00:17:18.659 ====================================== 00:17:18.659 Number of Reclaim Unit Handles: 8 00:17:18.659 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:17:18.659 RUH Usage Desc #001: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #002: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #003: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #004: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #005: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #006: RUH Attributes: Unused 00:17:18.659 RUH Usage Desc #007: RUH Attributes: Unused 00:17:18.659 00:17:18.659 FDP statistics log page 00:17:18.659 ======================= 00:17:18.659 Host bytes with metadata written: 905400320 00:17:18.659 Media bytes with metadata written: 905519104 00:17:18.659 Media bytes erased: 0 00:17:18.659 00:17:18.659 FDP Reclaim unit handle status 00:17:18.659 ============================== 00:17:18.659 Number of RUHS descriptors: 2 00:17:18.659 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000008b 00:17:18.659 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:17:18.659 00:17:18.659 FDP write on placement id: 0 success 00:17:18.659 00:17:18.659 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:17:18.659 00:17:18.659 IO mgmt send: RUH update for Placement ID: #0 Success 00:17:18.659 00:17:18.659 Get Feature: FDP Events for Placement handle: #0 00:17:18.659 ======================== 00:17:18.659 Number of FDP Events: 6 00:17:18.659 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:17:18.659 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:17:18.659 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:17:18.659 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:17:18.659 FDP Event: #4 Type: Media Reallocated Enabled: No 00:17:18.659 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
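[Editor's note] The four FDP log pages printed here (configurations, reclaim unit handle usage, statistics, events) sit at log identifiers 0x20-0x23 and are endurance-group scoped, so they can also be pulled raw with nvme-cli's generic get-log. A hedged sketch, assuming a reasonably recent nvme-cli whose get-log accepts --lsi for the endurance group ID:

    for lid in 0x20 0x21 0x22 0x23; do     # configs, RUH usage, stats, events
        nvme get-log /dev/nvme3 --log-id=$lid --log-len=512 --lsi=1
    done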
00:17:18.659 00:17:18.659 FDP events log page 00:17:18.659 =================== 00:17:18.659 Number of FDP events: 1 00:17:18.659 FDP Event #0: 00:17:18.659 Event Type: RU Not Written to Capacity 00:17:18.659 Placement Identifier: Valid 00:17:18.659 NSID: Valid 00:17:18.659 Location: Valid 00:17:18.659 Placement Identifier: 0 00:17:18.659 Event Timestamp: 8 00:17:18.659 Namespace Identifier: 1 00:17:18.659 Reclaim Group Identifier: 0 00:17:18.659 Reclaim Unit Handle Identifier: 0 00:17:18.659 00:17:18.659 FDP test passed 00:17:18.659 ************************************ 00:17:18.659 END TEST nvme_flexible_data_placement 00:17:18.659 ************************************ 00:17:18.659 00:17:18.659 real 0m0.301s 00:17:18.659 user 0m0.100s 00:17:18.659 sys 0m0.099s 00:17:18.659 17:18:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.659 17:18:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:17:18.659 ************************************ 00:17:18.659 END TEST nvme_fdp 00:17:18.659 ************************************ 00:17:18.659 00:17:18.659 real 0m8.186s 00:17:18.659 user 0m1.278s 00:17:18.659 sys 0m1.783s 00:17:18.659 17:18:04 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.659 17:18:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:17:18.659 17:18:04 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:17:18.659 17:18:04 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:17:18.659 17:18:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:18.659 17:18:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.659 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:17:18.659 ************************************ 00:17:18.660 START TEST nvme_rpc 00:17:18.660 ************************************ 00:17:18.660 17:18:04 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:17:18.918 * Looking for test storage... 
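[Editor's note] The nvme_rpc suite opening below starts by resolving the first controller's PCI address: get_first_nvme_bdf asks gen_nvme.sh for the JSON bdev config and pulls the addresses out with jq, exactly as the xtrace that follows shows. The same pipeline in isolation (paths as used in this workspace):

    bdfs=($("/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh" \
            | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    echo "${bdfs[0]}"            # first bdf -> 0000:00:10.0, assigned to $bdf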
00:17:18.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:18.918 17:18:04 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:17:18.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71078 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:17:18.918 17:18:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71078 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71078 ']' 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.919 17:18:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.919 [2024-07-24 17:18:05.110932] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
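[Editor's note] Once spdk_tgt is up and waitforlisten returns, the suite drives it over JSON-RPC. The exact calls replayed in the lines that follow, collected here for reference (paths as in this run; the apply_firmware call deliberately points at a missing file to exercise the error path):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # creates Nvme0n1
    $RPC bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected: "open file failed."
    $RPC bdev_nvme_detach_controller Nvme0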
00:17:18.919 [2024-07-24 17:18:05.111423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:17:19.177 [2024-07-24 17:18:05.292026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:19.436 [2024-07-24 17:18:05.573909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.436 [2024-07-24 17:18:05.573913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.368 17:18:06 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.368 17:18:06 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:20.368 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:17:20.626 Nvme0n1 00:17:20.626 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:17:20.626 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:17:20.884 request: 00:17:20.884 { 00:17:20.884 "bdev_name": "Nvme0n1", 00:17:20.884 "filename": "non_existing_file", 00:17:20.884 "method": "bdev_nvme_apply_firmware", 00:17:20.884 "req_id": 1 00:17:20.884 } 00:17:20.884 Got JSON-RPC error response 00:17:20.884 response: 00:17:20.884 { 00:17:20.884 "code": -32603, 00:17:20.884 "message": "open file failed." 00:17:20.884 } 00:17:20.884 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:17:20.884 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:17:20.884 17:18:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:17:21.143 17:18:07 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:21.143 17:18:07 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71078 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71078 ']' 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71078 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71078 00:17:21.143 killing process with pid 71078 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71078' 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71078 00:17:21.143 17:18:07 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71078 00:17:23.669 ************************************ 00:17:23.669 END TEST nvme_rpc 00:17:23.669 ************************************ 00:17:23.669 00:17:23.669 real 0m4.548s 00:17:23.669 user 0m8.482s 00:17:23.669 sys 0m0.752s 00:17:23.669 17:18:09 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.669 17:18:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.669 17:18:09 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:23.669 17:18:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:17:23.669 17:18:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.669 17:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:23.669 ************************************ 00:17:23.669 START TEST nvme_rpc_timeouts 00:17:23.669 ************************************ 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:23.669 * Looking for test storage... 00:17:23.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71154 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71154 00:17:23.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71183 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71183 00:17:23.669 17:18:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71183 ']' 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.669 17:18:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:17:23.669 [2024-07-24 17:18:09.665846] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
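For reference, the nvme_rpc test that just completed boils down to three RPCs; a hedged sketch assembled from the commands in the trace (the failure assertion is paraphrased, not the script's literal code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Firmware apply from a missing file must fail with -32603 "open file failed."
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "expected bdev_nvme_apply_firmware to fail" >&2; exit 1
    fi
    $rpc bdev_nvme_detach_controller Nvme0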
00:17:23.669 [2024-07-24 17:18:09.666065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:17:23.669 [2024-07-24 17:18:09.850359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:23.927 [2024-07-24 17:18:10.117953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.927 [2024-07-24 17:18:10.117965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.861 Checking default timeout settings: 00:17:24.861 17:18:10 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.861 17:18:10 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:17:24.861 17:18:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:17:24.861 17:18:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:25.151 Making settings changes with rpc: 00:17:25.151 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:17:25.151 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:17:25.409 Check default vs. modified settings: 00:17:25.409 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:17:25.409 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:25.974 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 Setting action_on_timeout is changed as expected. 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
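The timeouts test snapshots the bdev_nvme options before and after the change; a sketch assuming save_config output is redirected into the /tmp settings files named above (the redirection itself is not visible in the xtrace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_71154      # defaults, captured first
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_71154     # settings after the change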
00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:25.975 Setting timeout_us is changed as expected. 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71154 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:25.975 17:18:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:25.975 Setting timeout_admin_us is changed as expected. 00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
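The three "changed as expected" messages come from the same grep/awk/sed comparison run once per field; condensed from the loop in the trace:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_71154 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_71154 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # this run: none -> abort, 0 -> 12000000, 0 -> 24000000
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done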
00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71154 /tmp/settings_modified_71154 00:17:25.975 17:18:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71183 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71183 ']' 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71183 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71183 00:17:25.975 killing process with pid 71183 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71183' 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71183 00:17:25.975 17:18:12 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71183 00:17:28.511 RPC TIMEOUT SETTING TEST PASSED. 00:17:28.511 17:18:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:17:28.511 00:17:28.511 real 0m4.737s 00:17:28.511 user 0m8.912s 00:17:28.511 sys 0m0.785s 00:17:28.511 ************************************ 00:17:28.511 END TEST nvme_rpc_timeouts 00:17:28.511 ************************************ 00:17:28.511 17:18:14 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.511 17:18:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:17:28.511 17:18:14 -- spdk/autotest.sh@247 -- # uname -s 00:17:28.511 17:18:14 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:17:28.511 17:18:14 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:17:28.511 17:18:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:28.511 17:18:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.511 17:18:14 -- common/autotest_common.sh@10 -- # set +x 00:17:28.511 ************************************ 00:17:28.511 START TEST sw_hotplug 00:17:28.511 ************************************ 00:17:28.511 17:18:14 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:17:28.511 * Looking for test storage... 
00:17:28.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:28.511 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:28.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:28.771 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.771 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.771 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.771 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@230 -- # local class 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:17:28.771 17:18:14 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:17:28.771 17:18:14 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:17:28.771 17:18:14 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:17:28.771 17:18:14 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:29.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:29.339 Waiting for block devices as requested 00:17:29.339 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:29.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:29.597 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:29.597 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:34.912 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:34.912 17:18:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:17:34.912 17:18:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.170 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:17:35.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.170 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:17:35.735 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:17:35.735 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.735 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72048 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:17:35.992 17:18:22 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:35.992 17:18:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:36.275 Initializing NVMe Controllers 00:17:36.275 Attaching to 0000:00:10.0 00:17:36.275 Attaching to 0000:00:11.0 00:17:36.275 Attached to 0000:00:10.0 00:17:36.275 Attached to 0000:00:11.0 00:17:36.275 Initialization complete. Starting I/O... 
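Before the hotplug run, nvme_in_userspace picked the test controllers by PCI class code; the pipeline from the trace, pulled out for readability (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe):

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM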
00:17:36.275 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:17:36.275 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:17:36.275 00:17:37.209 QEMU NVMe Ctrl (12340 ): 1102 I/Os completed (+1102) 00:17:37.209 QEMU NVMe Ctrl (12341 ): 1154 I/Os completed (+1152) 00:17:37.209 00:17:38.141 QEMU NVMe Ctrl (12340 ): 2313 I/Os completed (+1211) 00:17:38.141 QEMU NVMe Ctrl (12341 ): 2447 I/Os completed (+1293) 00:17:38.141 00:17:39.515 QEMU NVMe Ctrl (12340 ): 4081 I/Os completed (+1768) 00:17:39.515 QEMU NVMe Ctrl (12341 ): 4220 I/Os completed (+1773) 00:17:39.515 00:17:40.449 QEMU NVMe Ctrl (12340 ): 5733 I/Os completed (+1652) 00:17:40.449 QEMU NVMe Ctrl (12341 ): 5899 I/Os completed (+1679) 00:17:40.449 00:17:41.383 QEMU NVMe Ctrl (12340 ): 7378 I/Os completed (+1645) 00:17:41.383 QEMU NVMe Ctrl (12341 ): 7578 I/Os completed (+1679) 00:17:41.383 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:41.976 [2024-07-24 17:18:28.122050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:17:41.976 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:41.976 [2024-07-24 17:18:28.124048] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.124129] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.124160] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.124188] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:41.976 [2024-07-24 17:18:28.127037] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.127094] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.127118] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.127141] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:41.976 [2024-07-24 17:18:28.150387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:41.976 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:41.976 [2024-07-24 17:18:28.152219] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.152270] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.152311] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.152336] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:41.976 [2024-07-24 17:18:28.155050] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.155100] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.155131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 [2024-07-24 17:18:28.155153] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:41.976 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:41.976 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:41.976 EAL: Scan for (pci) bus failed. 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:42.234 Attaching to 0000:00:10.0 00:17:42.234 Attached to 0000:00:10.0 00:17:42.234 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:17:42.234 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:42.234 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:42.234 Attaching to 0000:00:11.0 00:17:42.234 Attached to 0000:00:11.0 00:17:43.168 QEMU NVMe Ctrl (12340 ): 1580 I/Os completed (+1580) 00:17:43.168 QEMU NVMe Ctrl (12341 ): 1490 I/Os completed (+1490) 00:17:43.168 00:17:44.543 QEMU NVMe Ctrl (12340 ): 3461 I/Os completed (+1881) 00:17:44.543 QEMU NVMe Ctrl (12341 ): 3429 I/Os completed (+1939) 00:17:44.543 00:17:45.478 QEMU NVMe Ctrl (12340 ): 5349 I/Os completed (+1888) 00:17:45.478 QEMU NVMe Ctrl (12341 ): 5378 I/Os completed (+1949) 00:17:45.478 00:17:46.413 QEMU NVMe Ctrl (12340 ): 7347 I/Os completed (+1998) 00:17:46.413 QEMU NVMe Ctrl (12341 ): 7453 I/Os completed (+2075) 00:17:46.413 00:17:47.347 QEMU NVMe Ctrl (12340 ): 8980 I/Os completed (+1633) 00:17:47.347 QEMU NVMe Ctrl (12341 ): 9118 I/Os completed (+1665) 00:17:47.347 00:17:48.281 QEMU NVMe Ctrl (12340 ): 10500 I/Os completed (+1520) 00:17:48.281 QEMU NVMe Ctrl (12341 ): 10693 I/Os completed (+1575) 00:17:48.281 00:17:49.216 QEMU NVMe Ctrl (12340 ): 12016 I/Os completed (+1516) 00:17:49.216 
QEMU NVMe Ctrl (12341 ): 12235 I/Os completed (+1542) 00:17:49.216 00:17:50.151 QEMU NVMe Ctrl (12340 ): 13550 I/Os completed (+1534) 00:17:50.151 QEMU NVMe Ctrl (12341 ): 13802 I/Os completed (+1567) 00:17:50.151 00:17:51.527 QEMU NVMe Ctrl (12340 ): 15162 I/Os completed (+1612) 00:17:51.527 QEMU NVMe Ctrl (12341 ): 15456 I/Os completed (+1654) 00:17:51.527 00:17:52.466 QEMU NVMe Ctrl (12340 ): 16817 I/Os completed (+1655) 00:17:52.466 QEMU NVMe Ctrl (12341 ): 17181 I/Os completed (+1725) 00:17:52.466 00:17:53.401 QEMU NVMe Ctrl (12340 ): 18421 I/Os completed (+1604) 00:17:53.401 QEMU NVMe Ctrl (12341 ): 18811 I/Os completed (+1630) 00:17:53.401 00:17:54.336 QEMU NVMe Ctrl (12340 ): 20048 I/Os completed (+1627) 00:17:54.336 QEMU NVMe Ctrl (12341 ): 20543 I/Os completed (+1732) 00:17:54.336 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:54.336 [2024-07-24 17:18:40.437027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:17:54.336 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:54.336 [2024-07-24 17:18:40.439530] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.439822] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.439923] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.440021] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:54.336 [2024-07-24 17:18:40.443579] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.443671] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.443706] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.443735] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:54.336 [2024-07-24 17:18:40.464015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:17:54.336 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:54.336 [2024-07-24 17:18:40.466173] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.466257] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.466306] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.466335] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:54.336 [2024-07-24 17:18:40.469301] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.469365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.469401] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 [2024-07-24 17:18:40.469430] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:54.336 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:54.336 EAL: Scan for (pci) bus failed. 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:54.336 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:54.595 Attaching to 0000:00:10.0 00:17:54.595 Attached to 0000:00:10.0 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:54.595 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:54.595 Attaching to 0000:00:11.0 00:17:54.595 Attached to 0000:00:11.0 00:17:55.170 QEMU NVMe Ctrl (12340 ): 1120 I/Os completed (+1120) 00:17:55.170 QEMU NVMe Ctrl (12341 ): 1022 I/Os completed (+1022) 00:17:55.170 00:17:56.543 QEMU NVMe Ctrl (12340 ): 3044 I/Os completed (+1924) 00:17:56.543 QEMU NVMe Ctrl (12341 ): 3150 I/Os completed (+2128) 00:17:56.543 00:17:57.487 QEMU NVMe Ctrl (12340 ): 4752 I/Os completed (+1708) 00:17:57.487 QEMU NVMe Ctrl (12341 ): 4901 I/Os completed (+1751) 00:17:57.487 00:17:58.429 QEMU NVMe Ctrl (12340 ): 6709 I/Os completed (+1957) 00:17:58.429 QEMU NVMe Ctrl (12341 ): 7129 I/Os completed (+2228) 00:17:58.429 00:17:59.362 QEMU NVMe Ctrl (12340 ): 8480 I/Os completed (+1771) 00:17:59.362 QEMU NVMe Ctrl (12341 ): 8944 I/Os completed (+1815) 00:17:59.362 00:18:00.296 QEMU NVMe Ctrl (12340 ): 10236 I/Os completed (+1756) 00:18:00.296 QEMU NVMe Ctrl (12341 ): 10776 I/Os completed (+1832) 00:18:00.296 00:18:01.228 QEMU NVMe Ctrl (12340 ): 11811 I/Os completed (+1575) 00:18:01.228 QEMU NVMe Ctrl (12341 ): 12418 I/Os completed (+1642) 00:18:01.228 
00:18:02.160 QEMU NVMe Ctrl (12340 ): 13451 I/Os completed (+1640) 00:18:02.160 QEMU NVMe Ctrl (12341 ): 14174 I/Os completed (+1756) 00:18:02.160 00:18:03.533 QEMU NVMe Ctrl (12340 ): 14898 I/Os completed (+1447) 00:18:03.533 QEMU NVMe Ctrl (12341 ): 15745 I/Os completed (+1571) 00:18:03.533 00:18:04.467 QEMU NVMe Ctrl (12340 ): 16559 I/Os completed (+1661) 00:18:04.467 QEMU NVMe Ctrl (12341 ): 17535 I/Os completed (+1790) 00:18:04.467 00:18:05.403 QEMU NVMe Ctrl (12340 ): 18135 I/Os completed (+1576) 00:18:05.403 QEMU NVMe Ctrl (12341 ): 19173 I/Os completed (+1638) 00:18:05.403 00:18:06.334 QEMU NVMe Ctrl (12340 ): 19875 I/Os completed (+1740) 00:18:06.334 QEMU NVMe Ctrl (12341 ): 21006 I/Os completed (+1833) 00:18:06.334 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:06.592 [2024-07-24 17:18:52.750380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:18:06.592 Controller removed: QEMU NVMe Ctrl (12340 ) 00:18:06.592 [2024-07-24 17:18:52.752456] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.752669] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.752864] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.753075] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:06.592 [2024-07-24 17:18:52.756310] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.756510] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.756547] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.756573] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:06.592 [2024-07-24 17:18:52.783007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:18:06.592 Controller removed: QEMU NVMe Ctrl (12341 ) 00:18:06.592 [2024-07-24 17:18:52.784890] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.785081] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.785233] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.785269] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:06.592 [2024-07-24 17:18:52.787998] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.788090] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.788119] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 [2024-07-24 17:18:52.788139] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:06.592 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:18:06.592 EAL: Scan for (pci) bus failed. 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:18:06.592 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:06.849 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:06.849 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:06.849 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:06.849 17:18:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:06.849 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:06.849 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:06.849 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:06.849 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:06.849 Attaching to 0000:00:10.0 00:18:06.849 Attached to 0000:00:10.0 00:18:07.107 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:07.107 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:07.107 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:07.107 Attaching to 0000:00:11.0 00:18:07.107 Attached to 0000:00:11.0 00:18:07.107 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:07.107 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:07.107 [2024-07-24 17:18:53.125676] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:18:19.329 17:19:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:18:19.329 17:19:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:19.329 17:19:05 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.99 00:18:19.329 17:19:05 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.99 00:18:19.329 17:19:05 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:18:19.329 17:19:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.99 00:18:19.329 17:19:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:18:19.329 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 17:19:05 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72048 00:18:25.913 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72048) - No such process 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72048 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72591 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.913 17:19:11 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72591 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72591 ']' 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.913 17:19:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:25.913 [2024-07-24 17:19:11.285577] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:18:25.913 [2024-07-24 17:19:11.285790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:18:25.913 [2024-07-24 17:19:11.458873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.913 [2024-07-24 17:19:11.714146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:18:26.479 17:19:12 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:26.479 17:19:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:33.036 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:33.036 17:19:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.036 17:19:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:33.036 [2024-07-24 17:19:18.565649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:18:33.036 [2024-07-24 17:19:18.568725] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.036 [2024-07-24 17:19:18.568808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.036 [2024-07-24 17:19:18.568849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.036 [2024-07-24 17:19:18.568877] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.036 [2024-07-24 17:19:18.568897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.568911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 [2024-07-24 17:19:18.568945] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.568959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.568975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 [2024-07-24 17:19:18.568990] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.569025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.569039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 17:19:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.037 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:33.037 17:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:33.037 [2024-07-24 17:19:18.965707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:18:33.037 [2024-07-24 17:19:18.968829] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.968887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.968910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 [2024-07-24 17:19:18.968941] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.968957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.968974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 [2024-07-24 17:19:18.968990] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.969006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.969036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 [2024-07-24 17:19:18.969053] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:33.037 [2024-07-24 17:19:18.969066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.037 [2024-07-24 17:19:18.969082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:33.037 17:19:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.037 17:19:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:33.037 17:19:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:33.037 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:33.295 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:45.492 17:19:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:45.492 17:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 17:19:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:45.492 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:45.492 [2024-07-24 17:19:31.565853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:18:45.492 [2024-07-24 17:19:31.569315] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.492 [2024-07-24 17:19:31.569390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.492 [2024-07-24 17:19:31.569421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.492 [2024-07-24 17:19:31.569468] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.492 [2024-07-24 17:19:31.569511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.492 [2024-07-24 17:19:31.569526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.493 [2024-07-24 17:19:31.569545] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.493 [2024-07-24 17:19:31.569559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.493 [2024-07-24 17:19:31.569575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.493 [2024-07-24 17:19:31.569590] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.493 [2024-07-24 17:19:31.569606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.493 [2024-07-24 17:19:31.569620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:45.493 17:19:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 17:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 17:19:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:45.493 17:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:45.751 [2024-07-24 17:19:31.965888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:18:45.751 [2024-07-24 17:19:31.968986] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.751 [2024-07-24 17:19:31.969078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.751 [2024-07-24 17:19:31.969099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.751 [2024-07-24 17:19:31.969147] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.752 [2024-07-24 17:19:31.969186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.752 [2024-07-24 17:19:31.969204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.752 [2024-07-24 17:19:31.969265] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.752 [2024-07-24 17:19:31.969284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.752 [2024-07-24 17:19:31.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.752 [2024-07-24 17:19:31.969318] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:45.752 [2024-07-24 17:19:31.969332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.752 [2024-07-24 17:19:31.969348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:18:46.010 17:19:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 17:19:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 17:19:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:46.010 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:46.268 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:46.269 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:58.472 17:19:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.472 17:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:58.472 17:19:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:58.472 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:58.473 [2024-07-24 17:19:44.566100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
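
The bdev_bdfs calls traced throughout this loop are how the test decides whether SPDK has noticed a removal or a re-attach: they ask the target for all bdevs over RPC and pull out each NVMe namespace's PCI address. A minimal sketch of that helper as it appears in the sw_hotplug.sh@12-@13 trace, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py (the trace feeds the RPC output to jq through process substitution, which is why /dev/fd/63 shows up above):

    # Collect the unique PCI addresses backing all NVMe bdevs.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
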
00:18:58.473 [2024-07-24 17:19:44.569705] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.473 [2024-07-24 17:19:44.569760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.473 [2024-07-24 17:19:44.569786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.473 [2024-07-24 17:19:44.569815] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.473 [2024-07-24 17:19:44.569839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.473 [2024-07-24 17:19:44.569857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.473 [2024-07-24 17:19:44.569883] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.473 [2024-07-24 17:19:44.569902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.473 [2024-07-24 17:19:44.569922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.473 [2024-07-24 17:19:44.569940] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.473 [2024-07-24 17:19:44.569959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.473 [2024-07-24 17:19:44.569975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:58.473 17:19:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.473 17:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:58.473 17:19:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:58.473 17:19:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:58.731 [2024-07-24 17:19:44.966129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
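
The escaped string in the sw_hotplug.sh@71 checks above (\0\0\0\0\:\0\0\:\1\0\.\0 and so on) is not corruption: the right-hand side of == inside [[ ]] is treated as a glob pattern, so xtrace escapes every character when printing the comparison. Stripped of the escaping, the assertion is simply that the rescanned bdf list matches the original device list; roughly, with array names inferred from the "for dev in ${nvmes[@]}" trace rather than confirmed:

    # After re-attach, every original device must be visible as a bdev again.
    [[ "${bdfs[*]}" == "${nvmes[*]}" ]]
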
00:18:58.731 [2024-07-24 17:19:44.969125] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.731 [2024-07-24 17:19:44.969200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.731 [2024-07-24 17:19:44.969222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.731 [2024-07-24 17:19:44.969263] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.731 [2024-07-24 17:19:44.969294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.731 [2024-07-24 17:19:44.969343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.989 [2024-07-24 17:19:44.969359] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.989 [2024-07-24 17:19:44.969383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.989 [2024-07-24 17:19:44.969398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.989 [2024-07-24 17:19:44.969427] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:58.989 [2024-07-24 17:19:44.969444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.989 [2024-07-24 17:19:44.969464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:58.989 17:19:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.989 17:19:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:58.989 17:19:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:58.989 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:59.247 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:59.513 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:59.514 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:59.514 17:19:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.11 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.11 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:19:11.714 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:19:11.714 17:19:57 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:19:11.714 17:19:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:19:11.714 17:19:57 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:18.268 17:20:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.268 17:20:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:18.268 17:20:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.268 [2024-07-24 17:20:03.701420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:19:18.268 [2024-07-24 17:20:03.703668] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:03.703745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:03.703776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 [2024-07-24 17:20:03.703804] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:03.703822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:03.703836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 [2024-07-24 17:20:03.703853] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:03.703867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:03.703883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 [2024-07-24 17:20:03.703898] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:03.703913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:03.703927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:18.268 17:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:18.268 [2024-07-24 17:20:04.101431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
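
Each hotplug event traced here follows the same shape: surprise-remove every device, poll bdev_bdfs until SPDK has dropped the corresponding bdevs, rebind the devices, then give the driver a fixed window to re-attach. A sketch of one iteration, reconstructed from the sw_hotplug.sh@38-@66 trace; the xtrace only shows the echoed values, so the sysfs destinations below are assumptions about where those echoes land:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40: surprise removal
    done
    bdfs=($(bdev_bdfs))                               # @50
    while ((${#bdfs[@]} > 0)); do                     # loop until the bdevs are gone
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    echo 1 > /sys/bus/pci/rescan                      # @56: rediscover the devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60-@61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62
    done
    sleep 12                                          # @66: wait for re-attach
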
00:19:18.268 [2024-07-24 17:20:04.103528] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:04.103615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:04.103637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 [2024-07-24 17:20:04.103692] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:04.103718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:04.103735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.268 [2024-07-24 17:20:04.103752] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.268 [2024-07-24 17:20:04.103768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.268 [2024-07-24 17:20:04.103781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.269 [2024-07-24 17:20:04.103799] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:18.269 [2024-07-24 17:20:04.103812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.269 [2024-07-24 17:20:04.103828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:18.269 17:20:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.269 17:20:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:18.269 17:20:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:18.269 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:18.527 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:18.527 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:18.527 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:30.728 [2024-07-24 17:20:16.701602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
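
Worth noting from the trace further up: before this second round of three events, the test issued rpc_cmd bdev_nvme_set_hotplug -d and then -e (sw_hotplug.sh@119-@120), cycling SPDK's own hotplug poller before exercising it with use_bdev=true. Outside the harness, the same toggle is a plain RPC against the running target:

    # Disable, then re-enable, the nvme driver's hotplug monitor.
    scripts/rpc.py bdev_nvme_set_hotplug -d
    scripts/rpc.py bdev_nvme_set_hotplug -e
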
00:19:30.728 [2024-07-24 17:20:16.704066] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.728 [2024-07-24 17:20:16.704240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.728 [2024-07-24 17:20:16.704427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.728 [2024-07-24 17:20:16.704643] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.728 [2024-07-24 17:20:16.704802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.728 [2024-07-24 17:20:16.704979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.728 [2024-07-24 17:20:16.705293] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.728 [2024-07-24 17:20:16.705418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.728 [2024-07-24 17:20:16.705590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.728 [2024-07-24 17:20:16.705805] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.728 [2024-07-24 17:20:16.705960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.728 [2024-07-24 17:20:16.706121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.728 17:20:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:30.728 17:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:30.987 [2024-07-24 17:20:17.101634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:19:30.987 [2024-07-24 17:20:17.103959] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.987 [2024-07-24 17:20:17.104229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.987 [2024-07-24 17:20:17.104400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.987 [2024-07-24 17:20:17.104660] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.987 [2024-07-24 17:20:17.104819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.987 [2024-07-24 17:20:17.104993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.987 [2024-07-24 17:20:17.105169] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.987 [2024-07-24 17:20:17.105334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.987 [2024-07-24 17:20:17.105510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.988 [2024-07-24 17:20:17.105699] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:30.988 [2024-07-24 17:20:17.105860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.988 [2024-07-24 17:20:17.106033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:31.246 17:20:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.246 17:20:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:31.246 17:20:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:31.246 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:31.505 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:43.706 17:20:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.706 17:20:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:43.706 17:20:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:43.706 [2024-07-24 17:20:29.701805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:19:43.706 [2024-07-24 17:20:29.704277] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.706 [2024-07-24 17:20:29.704497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.706 [2024-07-24 17:20:29.704753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.706 [2024-07-24 17:20:29.704904] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.706 [2024-07-24 17:20:29.705050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.706 [2024-07-24 17:20:29.705213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.706 [2024-07-24 17:20:29.705365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.706 [2024-07-24 17:20:29.705415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.706 [2024-07-24 17:20:29.705576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.706 [2024-07-24 17:20:29.705768] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.706 [2024-07-24 17:20:29.705831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.706 [2024-07-24 17:20:29.706109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:43.706 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:43.706 17:20:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.707 17:20:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:43.707 17:20:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.707 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:19:43.707 17:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:43.965 [2024-07-24 17:20:30.101832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:19:43.965 [2024-07-24 17:20:30.104231] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.965 [2024-07-24 17:20:30.104460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.965 [2024-07-24 17:20:30.104622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.965 [2024-07-24 17:20:30.104942] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.965 [2024-07-24 17:20:30.104996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.965 [2024-07-24 17:20:30.105164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.965 [2024-07-24 17:20:30.105234] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.965 [2024-07-24 17:20:30.105370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.965 [2024-07-24 17:20:30.105444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.965 [2024-07-24 17:20:30.105583] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:43.965 [2024-07-24 17:20:30.105638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.965 [2024-07-24 17:20:30.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:19:44.223 17:20:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.223 17:20:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:44.223 17:20:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:44.223 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:44.480 17:20:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:56.700 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:56.700 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:56.700 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.08 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.08 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.08 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.08 2 00:19:56.701 remove_attach_helper took 45.08s to complete (handling 2 nvme drive(s)) 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:19:56.701 17:20:42 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72591 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72591 ']' 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72591 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72591 00:19:56.701 killing process with pid 72591 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72591' 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72591 00:19:56.701 17:20:42 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72591 00:19:58.598 17:20:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:59.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.420 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.420 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.678 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:59.678 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:59.678 00:19:59.678 real 2m31.577s 00:19:59.678 user 1m51.872s 00:19:59.678 sys 0m19.356s 00:19:59.678 17:20:45 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:59.678 17:20:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:59.678 ************************************ 00:19:59.678 END TEST sw_hotplug 00:19:59.678 ************************************ 00:19:59.678 17:20:45 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:19:59.678 17:20:45 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:59.678 17:20:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:59.678 17:20:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:59.678 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:19:59.678 ************************************ 00:19:59.678 START TEST nvme_xnvme 00:19:59.678 ************************************ 00:19:59.678 17:20:45 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:59.936 * Looking for test storage... 
00:19:59.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:59.936 17:20:45 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.936 17:20:45 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.936 17:20:45 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.936 17:20:45 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.936 17:20:45 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.936 17:20:45 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.936 17:20:45 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.936 17:20:45 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:59.936 17:20:45 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.936 17:20:45 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:19:59.936 17:20:45 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:59.936 17:20:45 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:59.936 17:20:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.936 ************************************ 00:19:59.936 START TEST xnvme_to_malloc_dd_copy 00:19:59.936 ************************************ 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
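
The init_null_blk gb=1 call traced just above is all the device setup this copy test needs: it loads the null_blk module with a 1 GiB capacity, which surfaces as /dev/nullb0 for the xnvme bdev to open. A sketch of the dd/common.sh@186-@191 helpers as the trace suggests; the reload-if-present branch is an assumption based on the [[ -e /sys/module/null_blk ]] guard, and @191 shows the matching teardown:

    init_null_blk() {
        # reload so that fresh parameters (e.g. gb=1) take effect
        [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
        modprobe null_blk "$@"
    }
    remove_null_blk() {
        modprobe -r null_blk
    }
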
00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:59.936 17:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:19:59.936 { 00:19:59.937 "subsystems": [ 00:19:59.937 { 00:19:59.937 "subsystem": "bdev", 00:19:59.937 "config": [ 00:19:59.937 { 00:19:59.937 "params": { 00:19:59.937 "block_size": 512, 00:19:59.937 "num_blocks": 2097152, 00:19:59.937 "name": "malloc0" 00:19:59.937 }, 00:19:59.937 "method": "bdev_malloc_create" 00:19:59.937 }, 00:19:59.937 { 00:19:59.937 "params": { 00:19:59.937 "io_mechanism": "libaio", 00:19:59.937 "filename": "/dev/nullb0", 00:19:59.937 "name": "null0" 00:19:59.937 }, 00:19:59.937 "method": "bdev_xnvme_create" 00:19:59.937 }, 00:19:59.937 { 00:19:59.937 "method": "bdev_wait_for_examine" 00:19:59.937 } 00:19:59.937 ] 00:19:59.937 } 00:19:59.937 ] 00:19:59.937 } 00:19:59.937 [2024-07-24 17:20:46.094012] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
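
The JSON printed above is the entire configuration spdk_dd runs with: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xnvme bdev wrapped around /dev/nullb0 as the sink. To reproduce this first copy outside the autotest harness, the same config can live in a file instead of /dev/fd/62; the file name xnvme.json is ours, while the command and JSON are verbatim from the trace:

    cat > xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create"
            },
            {
              "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme.json
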
00:19:59.937 [2024-07-24 17:20:46.094410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73937 ] 00:20:00.194 [2024-07-24 17:20:46.274223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.452 [2024-07-24 17:20:46.560958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.135  Copying: 172/1024 [MB] (172 MBps) Copying: 343/1024 [MB] (171 MBps) Copying: 509/1024 [MB] (166 MBps) Copying: 675/1024 [MB] (165 MBps) Copying: 843/1024 [MB] (167 MBps) Copying: 1012/1024 [MB] (169 MBps) Copying: 1024/1024 [MB] (average 168 MBps) 00:20:12.135 00:20:12.135 17:20:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:20:12.135 17:20:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:20:12.135 17:20:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:12.135 17:20:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:12.135 { 00:20:12.135 "subsystems": [ 00:20:12.135 { 00:20:12.135 "subsystem": "bdev", 00:20:12.135 "config": [ 00:20:12.135 { 00:20:12.135 "params": { 00:20:12.135 "block_size": 512, 00:20:12.135 "num_blocks": 2097152, 00:20:12.135 "name": "malloc0" 00:20:12.135 }, 00:20:12.135 "method": "bdev_malloc_create" 00:20:12.135 }, 00:20:12.135 { 00:20:12.135 "params": { 00:20:12.135 "io_mechanism": "libaio", 00:20:12.135 "filename": "/dev/nullb0", 00:20:12.135 "name": "null0" 00:20:12.135 }, 00:20:12.135 "method": "bdev_xnvme_create" 00:20:12.135 }, 00:20:12.135 { 00:20:12.135 "method": "bdev_wait_for_examine" 00:20:12.135 } 00:20:12.135 ] 00:20:12.135 } 00:20:12.135 ] 00:20:12.135 } 00:20:12.135 [2024-07-24 17:20:57.651015] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
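
The second pass above flips the arguments to drive the opposite direction, reading from the xnvme bdev back into the malloc bdev, which is why null0 now sits on the --ib side. With the config file from the previous sketch:

    ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json xnvme.json
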
00:20:12.135 [2024-07-24 17:20:57.651237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74063 ] 00:20:12.135 [2024-07-24 17:20:57.827755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.135 [2024-07-24 17:20:58.045547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.220  Copying: 181/1024 [MB] (181 MBps) Copying: 362/1024 [MB] (180 MBps) Copying: 540/1024 [MB] (177 MBps) Copying: 717/1024 [MB] (177 MBps) Copying: 892/1024 [MB] (175 MBps) Copying: 1024/1024 [MB] (average 179 MBps) 00:20:23.220 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:23.220 17:21:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:23.220 { 00:20:23.220 "subsystems": [ 00:20:23.220 { 00:20:23.220 "subsystem": "bdev", 00:20:23.220 "config": [ 00:20:23.220 { 00:20:23.220 "params": { 00:20:23.220 "block_size": 512, 00:20:23.220 "num_blocks": 2097152, 00:20:23.220 "name": "malloc0" 00:20:23.220 }, 00:20:23.220 "method": "bdev_malloc_create" 00:20:23.220 }, 00:20:23.220 { 00:20:23.220 "params": { 00:20:23.220 "io_mechanism": "io_uring", 00:20:23.220 "filename": "/dev/nullb0", 00:20:23.220 "name": "null0" 00:20:23.220 }, 00:20:23.220 "method": "bdev_xnvme_create" 00:20:23.220 }, 00:20:23.220 { 00:20:23.220 "method": "bdev_wait_for_examine" 00:20:23.220 } 00:20:23.220 ] 00:20:23.220 } 00:20:23.220 ] 00:20:23.220 } 00:20:23.220 [2024-07-24 17:21:08.716761] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:23.220 [2024-07-24 17:21:08.716945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74188 ] 00:20:23.220 [2024-07-24 17:21:08.891246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.220 [2024-07-24 17:21:09.119412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.801  Copying: 190/1024 [MB] (190 MBps) Copying: 378/1024 [MB] (188 MBps) Copying: 571/1024 [MB] (192 MBps) Copying: 765/1024 [MB] (193 MBps) Copying: 952/1024 [MB] (187 MBps) Copying: 1024/1024 [MB] (average 190 MBps) 00:20:33.801 00:20:33.801 17:21:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:20:33.801 17:21:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:20:33.801 17:21:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:33.801 17:21:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:33.801 { 00:20:33.801 "subsystems": [ 00:20:33.801 { 00:20:33.801 "subsystem": "bdev", 00:20:33.801 "config": [ 00:20:33.801 { 00:20:33.801 "params": { 00:20:33.801 "block_size": 512, 00:20:33.801 "num_blocks": 2097152, 00:20:33.801 "name": "malloc0" 00:20:33.801 }, 00:20:33.801 "method": "bdev_malloc_create" 00:20:33.801 }, 00:20:33.801 { 00:20:33.801 "params": { 00:20:33.801 "io_mechanism": "io_uring", 00:20:33.801 "filename": "/dev/nullb0", 00:20:33.801 "name": "null0" 00:20:33.801 }, 00:20:33.801 "method": "bdev_xnvme_create" 00:20:33.801 }, 00:20:33.801 { 00:20:33.801 "method": "bdev_wait_for_examine" 00:20:33.801 } 00:20:33.801 ] 00:20:33.801 } 00:20:33.801 ] 00:20:33.801 } 00:20:33.801 [2024-07-24 17:21:19.397135] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:33.801 [2024-07-24 17:21:19.397574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74303 ] 00:20:33.801 [2024-07-24 17:21:19.576288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.801 [2024-07-24 17:21:19.800829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.683  Copying: 192/1024 [MB] (192 MBps) Copying: 384/1024 [MB] (191 MBps) Copying: 578/1024 [MB] (194 MBps) Copying: 771/1024 [MB] (192 MBps) Copying: 961/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 192 MBps) 00:20:43.683 00:20:43.683 17:21:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:20:43.683 17:21:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:20:43.683 00:20:43.683 real 0m43.952s 00:20:43.683 user 0m37.784s 00:20:43.683 sys 0m5.564s 00:20:43.683 17:21:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.683 17:21:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:43.683 ************************************ 00:20:43.683 END TEST xnvme_to_malloc_dd_copy 00:20:43.683 ************************************ 00:20:43.942 17:21:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:43.942 17:21:29 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:43.942 17:21:29 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.942 17:21:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 ************************************ 00:20:43.942 START TEST xnvme_bdevperf 00:20:43.942 ************************************ 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:43.942 17:21:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 { 00:20:43.942 "subsystems": [ 00:20:43.942 { 00:20:43.942 "subsystem": "bdev", 00:20:43.942 "config": [ 00:20:43.942 { 00:20:43.942 "params": { 00:20:43.942 "io_mechanism": "libaio", 00:20:43.942 "filename": "/dev/nullb0", 00:20:43.942 "name": "null0" 00:20:43.942 }, 00:20:43.942 "method": "bdev_xnvme_create" 00:20:43.942 }, 00:20:43.942 { 00:20:43.942 "method": "bdev_wait_for_examine" 00:20:43.942 } 00:20:43.942 ] 00:20:43.942 } 00:20:43.942 ] 00:20:43.942 } 00:20:43.942 [2024-07-24 17:21:30.078219] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:43.942 [2024-07-24 17:21:30.078396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74454 ] 00:20:44.200 [2024-07-24 17:21:30.242908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.459 [2024-07-24 17:21:30.471978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.717 Running I/O for 5 seconds... 00:20:49.981 00:20:49.981 Latency(us) 00:20:49.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.981 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:49.981 null0 : 5.00 126566.16 494.40 0.00 0.00 502.56 175.01 960.70 00:20:49.981 =================================================================================================================== 00:20:49.981 Total : 126566.16 494.40 0.00 0.00 502.56 175.01 960.70 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:50.915 17:21:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:50.915 { 00:20:50.915 "subsystems": [ 00:20:50.915 { 00:20:50.915 "subsystem": "bdev", 00:20:50.915 "config": [ 00:20:50.915 { 00:20:50.915 "params": { 00:20:50.915 "io_mechanism": "io_uring", 00:20:50.915 "filename": "/dev/nullb0", 00:20:50.915 "name": "null0" 00:20:50.915 }, 00:20:50.915 "method": "bdev_xnvme_create" 00:20:50.915 }, 00:20:50.915 { 00:20:50.915 "method": "bdev_wait_for_examine" 00:20:50.915 } 00:20:50.915 ] 00:20:50.915 } 00:20:50.915 ] 00:20:50.915 } 00:20:50.915 [2024-07-24 17:21:36.995529] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
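[The libaio results above and the io_uring pass being started here use identical bdevperf flags; pulled out of the harness, the run reduces to the following sketch (config path illustrative; /dev/nullb0 comes from the modprobe null_blk gb=1 step earlier in this test):

    cat > /tmp/null0.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create",
       "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/null0.json -q 64 -w randread -t 5 -T null0 -o 4096

As a sanity check on the table above: 126566 IOPS x 4 KiB = 494.4 MiB/s, matching the MiB/s column, and with a queue depth of 64 Little's law gives 64 / 126566 IOPS = 506 us, in line with the reported 502.56 us average latency.]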
00:20:50.915 [2024-07-24 17:21:36.995701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74528 ] 00:20:51.173 [2024-07-24 17:21:37.159233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.173 [2024-07-24 17:21:37.394766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.807 Running I/O for 5 seconds... 00:20:57.072 00:20:57.072 Latency(us) 00:20:57.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.072 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:57.072 null0 : 5.00 172386.21 673.38 0.00 0.00 368.31 209.45 2055.45 00:20:57.072 =================================================================================================================== 00:20:57.072 Total : 172386.21 673.38 0.00 0.00 368.31 209.45 2055.45 00:20:57.637 17:21:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:20:57.637 17:21:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:20:57.637 00:20:57.637 real 0m13.869s 00:20:57.637 user 0m10.706s 00:20:57.637 sys 0m2.944s 00:20:57.637 17:21:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.637 17:21:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:57.637 ************************************ 00:20:57.637 END TEST xnvme_bdevperf 00:20:57.637 ************************************ 00:20:57.895 00:20:57.895 real 0m58.023s 00:20:57.895 user 0m48.563s 00:20:57.895 sys 0m8.625s 00:20:57.895 17:21:43 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.895 17:21:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:57.895 ************************************ 00:20:57.895 END TEST nvme_xnvme 00:20:57.895 ************************************ 00:20:57.895 17:21:43 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:57.895 17:21:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:57.895 17:21:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.895 17:21:43 -- common/autotest_common.sh@10 -- # set +x 00:20:57.895 ************************************ 00:20:57.895 START TEST blockdev_xnvme 00:20:57.895 ************************************ 00:20:57.895 17:21:43 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:57.895 * Looking for test storage... 
00:20:57.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74668 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74668 00:20:57.895 17:21:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 74668 ']' 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.895 17:21:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:58.153 [2024-07-24 17:21:44.164379] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
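[Once the target is listening, the setup below rebinds the controllers, filters out zoned namespaces via /sys/block/*/queue/zoned, and registers every remaining /dev/nvme*n* node as an io_uring xnvme bdev over RPC. Distilled into a standalone loop; a sketch only, assuming the default RPC socket (the harness batches the same calls through rpc_cmd):

    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        dev=${nvme##*/}
        # zoned namespaces are excluded, mirroring get_zoned_devs/is_block_zoned
        if [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]; then
            continue
        fi
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create "$nvme" "$dev" io_uring
    done
]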
00:20:58.153 [2024-07-24 17:21:44.164574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74668 ] 00:20:58.153 [2024-07-24 17:21:44.339678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.411 [2024-07-24 17:21:44.552615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.344 17:21:45 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.344 17:21:45 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:20:59.344 17:21:45 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:59.344 17:21:45 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:20:59.344 17:21:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:59.344 17:21:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:59.344 17:21:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:59.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.616 Waiting for block devices as requested 00:20:59.886 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.886 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.886 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.143 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.406 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:05.406 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:21:05.406 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:21:05.407 nvme0n1 00:21:05.407 nvme1n1 00:21:05.407 nvme2n1 00:21:05.407 nvme2n2 00:21:05.407 nvme2n3 00:21:05.407 nvme3n1 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd 
bdev_get_bdevs 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "82b09649-3732-4334-b2c9-0de8ea173185"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "82b09649-3732-4334-b2c9-0de8ea173185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "03c89791-cc92-431e-9fa1-d7276b72fce2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03c89791-cc92-431e-9fa1-d7276b72fce2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8d69c49c-3588-4577-827b-788524064a5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d69c49c-3588-4577-827b-788524064a5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c8104883-7793-4eef-aa08-8cd28127646f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8104883-7793-4eef-aa08-8cd28127646f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9049c42f-18d1-498b-bb9e-3f61de28fa1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9049c42f-18d1-498b-bb9e-3f61de28fa1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "049ec522-b3ea-4aca-95cf-d9daba8f4bed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "049ec522-b3ea-4aca-95cf-d9daba8f4bed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:05.407 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74668 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 74668 ']' 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 74668 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74668 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:05.407 17:21:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:05.407 killing process with pid 74668 00:21:05.407 17:21:51 blockdev_xnvme -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74668' 00:21:05.408 17:21:51 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 74668 00:21:05.408 17:21:51 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 74668 00:21:07.941 17:21:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:07.941 17:21:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:07.941 17:21:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:07.941 17:21:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.941 17:21:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:07.941 ************************************ 00:21:07.941 START TEST bdev_hello_world 00:21:07.941 ************************************ 00:21:07.941 17:21:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:07.941 [2024-07-24 17:21:53.848280] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:07.941 [2024-07-24 17:21:53.848514] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ] 00:21:07.941 [2024-07-24 17:21:54.021867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.201 [2024-07-24 17:21:54.297700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.464 [2024-07-24 17:21:54.695704] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:08.464 [2024-07-24 17:21:54.695811] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:21:08.464 [2024-07-24 17:21:54.695838] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:08.464 [2024-07-24 17:21:54.698056] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:08.464 [2024-07-24 17:21:54.698472] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:08.464 [2024-07-24 17:21:54.698508] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:08.464 [2024-07-24 17:21:54.698981] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:08.464 00:21:08.464 [2024-07-24 17:21:54.699031] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:09.848 00:21:09.848 real 0m2.034s 00:21:09.848 user 0m1.631s 00:21:09.848 sys 0m0.286s 00:21:09.848 17:21:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.848 17:21:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:09.848 ************************************ 00:21:09.848 END TEST bdev_hello_world 00:21:09.848 ************************************ 00:21:09.848 17:21:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:09.848 17:21:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:09.848 17:21:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:09.848 17:21:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:09.848 ************************************ 00:21:09.848 START TEST bdev_bounds 00:21:09.848 ************************************ 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75083 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:09.848 Process bdevio pid: 75083 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75083' 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75083 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75083 ']' 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.848 17:21:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:09.848 [2024-07-24 17:21:55.943234] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
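[bdev_bounds runs bdevio as an RPC-driven server and only then kicks the CUnit suites from tests.py; the two halves of that handshake, taken from the command lines in this log (backgrounding with '&' is an assumption here, since the harness manages the process lifetime itself):

    # -w: start up and wait for the RPC trigger; -s 0: no reserved memory
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # fire all suites once the server is up
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
]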
00:21:09.848 [2024-07-24 17:21:55.944018] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75083 ] 00:21:10.106 [2024-07-24 17:21:56.118791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.364 [2024-07-24 17:21:56.349688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.364 [2024-07-24 17:21:56.349807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.364 [2024-07-24 17:21:56.349822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.930 17:21:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.930 17:21:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:21:10.930 17:21:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:10.930 I/O targets: 00:21:10.930 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:10.930 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:10.930 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.930 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.930 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.930 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:10.930 00:21:10.930 00:21:10.930 CUnit - A unit testing framework for C - Version 2.1-3 00:21:10.930 http://cunit.sourceforge.net/ 00:21:10.930 00:21:10.930 00:21:10.930 Suite: bdevio tests on: nvme3n1 00:21:10.930 Test: blockdev write read block ...passed 00:21:10.930 Test: blockdev write zeroes read block ...passed 00:21:10.930 Test: blockdev write zeroes read no split ...passed 00:21:10.930 Test: blockdev write zeroes read split ...passed 00:21:10.930 Test: blockdev write zeroes read split partial ...passed 00:21:10.930 Test: blockdev reset ...passed 00:21:10.930 Test: blockdev write read 8 blocks ...passed 00:21:10.930 Test: blockdev write read size > 128k ...passed 00:21:10.930 Test: blockdev write read invalid size ...passed 00:21:10.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.930 Test: blockdev write read max offset ...passed 00:21:10.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.930 Test: blockdev writev readv 8 blocks ...passed 00:21:10.930 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.930 Test: blockdev writev readv block ...passed 00:21:10.930 Test: blockdev writev readv size > 128k ...passed 00:21:10.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.930 Test: blockdev comparev and writev ...passed 00:21:10.930 Test: blockdev nvme passthru rw ...passed 00:21:10.930 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.930 Test: blockdev nvme admin passthru ...passed 00:21:10.930 Test: blockdev copy ...passed 00:21:10.930 Suite: bdevio tests on: nvme2n3 00:21:10.930 Test: blockdev write read block ...passed 00:21:10.930 Test: blockdev write zeroes read block ...passed 00:21:10.930 Test: blockdev write zeroes read no split ...passed 00:21:10.930 Test: blockdev write zeroes read split ...passed 00:21:10.930 Test: blockdev write zeroes read split partial ...passed 00:21:10.930 Test: blockdev reset ...passed 
00:21:10.930 Test: blockdev write read 8 blocks ...passed 00:21:10.930 Test: blockdev write read size > 128k ...passed 00:21:10.930 Test: blockdev write read invalid size ...passed 00:21:10.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.930 Test: blockdev write read max offset ...passed 00:21:10.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.930 Test: blockdev writev readv 8 blocks ...passed 00:21:10.930 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.930 Test: blockdev writev readv block ...passed 00:21:10.930 Test: blockdev writev readv size > 128k ...passed 00:21:10.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.930 Test: blockdev comparev and writev ...passed 00:21:10.930 Test: blockdev nvme passthru rw ...passed 00:21:10.930 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.930 Test: blockdev nvme admin passthru ...passed 00:21:10.930 Test: blockdev copy ...passed 00:21:10.930 Suite: bdevio tests on: nvme2n2 00:21:10.930 Test: blockdev write read block ...passed 00:21:10.930 Test: blockdev write zeroes read block ...passed 00:21:10.930 Test: blockdev write zeroes read no split ...passed 00:21:10.930 Test: blockdev write zeroes read split ...passed 00:21:10.930 Test: blockdev write zeroes read split partial ...passed 00:21:10.930 Test: blockdev reset ...passed 00:21:10.930 Test: blockdev write read 8 blocks ...passed 00:21:10.930 Test: blockdev write read size > 128k ...passed 00:21:10.930 Test: blockdev write read invalid size ...passed 00:21:10.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.930 Test: blockdev write read max offset ...passed 00:21:10.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.930 Test: blockdev writev readv 8 blocks ...passed 00:21:10.930 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.930 Test: blockdev writev readv block ...passed 00:21:10.930 Test: blockdev writev readv size > 128k ...passed 00:21:10.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.930 Test: blockdev comparev and writev ...passed 00:21:10.930 Test: blockdev nvme passthru rw ...passed 00:21:10.930 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.930 Test: blockdev nvme admin passthru ...passed 00:21:10.930 Test: blockdev copy ...passed 00:21:10.930 Suite: bdevio tests on: nvme2n1 00:21:10.930 Test: blockdev write read block ...passed 00:21:10.931 Test: blockdev write zeroes read block ...passed 00:21:10.931 Test: blockdev write zeroes read no split ...passed 00:21:11.189 Test: blockdev write zeroes read split ...passed 00:21:11.189 Test: blockdev write zeroes read split partial ...passed 00:21:11.189 Test: blockdev reset ...passed 00:21:11.189 Test: blockdev write read 8 blocks ...passed 00:21:11.189 Test: blockdev write read size > 128k ...passed 00:21:11.189 Test: blockdev write read invalid size ...passed 00:21:11.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.189 Test: blockdev write read max offset ...passed 00:21:11.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.189 Test: blockdev writev readv 8 blocks 
...passed 00:21:11.189 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.189 Test: blockdev writev readv block ...passed 00:21:11.189 Test: blockdev writev readv size > 128k ...passed 00:21:11.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.189 Test: blockdev comparev and writev ...passed 00:21:11.189 Test: blockdev nvme passthru rw ...passed 00:21:11.189 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.189 Test: blockdev nvme admin passthru ...passed 00:21:11.189 Test: blockdev copy ...passed 00:21:11.189 Suite: bdevio tests on: nvme1n1 00:21:11.189 Test: blockdev write read block ...passed 00:21:11.189 Test: blockdev write zeroes read block ...passed 00:21:11.189 Test: blockdev write zeroes read no split ...passed 00:21:11.189 Test: blockdev write zeroes read split ...passed 00:21:11.189 Test: blockdev write zeroes read split partial ...passed 00:21:11.189 Test: blockdev reset ...passed 00:21:11.189 Test: blockdev write read 8 blocks ...passed 00:21:11.189 Test: blockdev write read size > 128k ...passed 00:21:11.189 Test: blockdev write read invalid size ...passed 00:21:11.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.189 Test: blockdev write read max offset ...passed 00:21:11.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.189 Test: blockdev writev readv 8 blocks ...passed 00:21:11.189 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.189 Test: blockdev writev readv block ...passed 00:21:11.189 Test: blockdev writev readv size > 128k ...passed 00:21:11.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.189 Test: blockdev comparev and writev ...passed 00:21:11.189 Test: blockdev nvme passthru rw ...passed 00:21:11.189 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.189 Test: blockdev nvme admin passthru ...passed 00:21:11.189 Test: blockdev copy ...passed 00:21:11.189 Suite: bdevio tests on: nvme0n1 00:21:11.189 Test: blockdev write read block ...passed 00:21:11.189 Test: blockdev write zeroes read block ...passed 00:21:11.189 Test: blockdev write zeroes read no split ...passed 00:21:11.189 Test: blockdev write zeroes read split ...passed 00:21:11.189 Test: blockdev write zeroes read split partial ...passed 00:21:11.189 Test: blockdev reset ...passed 00:21:11.189 Test: blockdev write read 8 blocks ...passed 00:21:11.189 Test: blockdev write read size > 128k ...passed 00:21:11.189 Test: blockdev write read invalid size ...passed 00:21:11.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.189 Test: blockdev write read max offset ...passed 00:21:11.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.189 Test: blockdev writev readv 8 blocks ...passed 00:21:11.189 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.189 Test: blockdev writev readv block ...passed 00:21:11.189 Test: blockdev writev readv size > 128k ...passed 00:21:11.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.189 Test: blockdev comparev and writev ...passed 00:21:11.189 Test: blockdev nvme passthru rw ...passed 00:21:11.189 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.189 Test: blockdev nvme admin passthru ...passed 00:21:11.189 Test: blockdev copy ...passed 
00:21:11.189 00:21:11.189 Run Summary: Type Total Ran Passed Failed Inactive 00:21:11.189 suites 6 6 n/a 0 0 00:21:11.189 tests 138 138 138 0 0 00:21:11.189 asserts 780 780 780 0 n/a 00:21:11.189 00:21:11.189 Elapsed time = 1.035 seconds 00:21:11.189 0 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75083 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75083 ']' 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75083 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75083 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75083' 00:21:11.189 killing process with pid 75083 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75083 00:21:11.189 17:21:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75083 00:21:12.563 17:21:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:12.563 00:21:12.563 real 0m2.686s 00:21:12.563 user 0m6.219s 00:21:12.563 sys 0m0.415s 00:21:12.563 17:21:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.563 17:21:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:12.563 ************************************ 00:21:12.563 END TEST bdev_bounds 00:21:12.563 ************************************ 00:21:12.563 17:21:58 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:21:12.563 17:21:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:12.563 17:21:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.563 17:21:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:12.563 ************************************ 00:21:12.563 START TEST bdev_nbd 00:21:12.563 ************************************ 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75144 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:12.563 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75144 /var/tmp/spdk-nbd.sock 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75144 ']' 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.564 17:21:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:12.564 [2024-07-24 17:21:58.690726] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
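[The nbd pass that follows exports each xnvme bdev through the kernel nbd driver and proves it readable with one 4 KiB direct read. The per-device check, distilled from the waitfornbd/dd steps below (output file and poll loop are simplified stand-ins for the harness helpers):

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock nbd_start_disk nvme0n1
    # wait for the kernel to publish the device
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # a single direct 4 KiB read must come back as exactly 4096 bytes
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]
]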
00:21:12.564 [2024-07-24 17:21:58.690985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.822 [2024-07-24 17:21:58.866940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.081 [2024-07-24 17:21:59.088912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:13.340 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.599 
1+0 records in 00:21:13.599 1+0 records out 00:21:13.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047766 s, 8.6 MB/s 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:13.599 17:21:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.166 1+0 records in 00:21:14.166 1+0 records out 00:21:14.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831559 s, 4.9 MB/s 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.166 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:14.424 17:22:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.424 1+0 records in 00:21:14.424 1+0 records out 00:21:14.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830561 s, 4.9 MB/s 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.424 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:14.425 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:14.425 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.425 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.425 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:14.683 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.683 1+0 records in 00:21:14.684 1+0 records out 00:21:14.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654388 s, 6.3 MB/s 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.684 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:15.027 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.028 1+0 records in 00:21:15.028 1+0 records out 00:21:15.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801036 s, 5.1 MB/s 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:15.028 17:22:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:21:15.028 17:22:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:21:15.028 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:15.285 1+0 records in
00:21:15.285 1+0 records out
00:21:15.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000859988 s, 4.8 MB/s
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:21:15.285 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd0",
00:21:15.543 "bdev_name": "nvme0n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd1",
00:21:15.543 "bdev_name": "nvme1n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd2",
00:21:15.543 "bdev_name": "nvme2n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd3",
00:21:15.543 "bdev_name": "nvme2n2"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd4",
00:21:15.543 "bdev_name": "nvme2n3"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd5",
00:21:15.543 "bdev_name": "nvme3n1"
00:21:15.543 }
00:21:15.543 ]'
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd0",
00:21:15.543 "bdev_name": "nvme0n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd1",
00:21:15.543 "bdev_name": "nvme1n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd2",
00:21:15.543 "bdev_name": "nvme2n1"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd3",
00:21:15.543 "bdev_name": "nvme2n2"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd4",
00:21:15.543 "bdev_name": "nvme2n3"
00:21:15.543 },
00:21:15.543 {
00:21:15.543 "nbd_device": "/dev/nbd5",
00:21:15.543 "bdev_name": "nvme3n1"
00:21:15.543 }
00:21:15.543 ]'
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
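The @118/@119 steps above turn the nbd_get_disks RPC output into a bash array of device paths. Condensed from the trace (same jq filter and rpc.py path as used in this run):

  # List the active nbd exports and keep just the /dev/nbdX paths.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # Each array element is {"nbd_device": ..., "bdev_name": ...}.
  nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
  echo "exported devices: ${nbd_disks_name[@]}"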
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:15.543 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:15.801 17:22:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:21:16.060 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:16.060 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:16.060 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:16.060 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:16.060 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:16.061 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:16.061 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:16.061 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:16.061 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:16.061 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep
-q -w nbd2 /proc/partitions 00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.320 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:16.578 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.836 17:22:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:17.094 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:17.352 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:21:17.610 /dev/nbd0 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.610 1+0 records in 00:21:17.610 1+0 records out 00:21:17.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729803 s, 5.6 MB/s 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:17.610 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:21:17.868 /dev/nbd1 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:17.868 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.869 1+0 records in 00:21:17.869 1+0 records out 00:21:17.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876813 s, 4.7 MB/s 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:17.869 17:22:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:17.869 17:22:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:21:18.127 /dev/nbd10 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.127 1+0 records in 00:21:18.127 1+0 records out 00:21:18.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880051 s, 4.7 MB/s 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:18.127 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:21:18.386 /dev/nbd11 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:18.386 17:22:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.386 1+0 records in 00:21:18.386 1+0 records out 00:21:18.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610658 s, 6.7 MB/s 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:18.386 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:21:18.645 /dev/nbd12 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.645 1+0 records in 00:21:18.645 1+0 records out 00:21:18.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000852656 s, 4.8 MB/s 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:21:18.645 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:21:18.904 /dev/nbd13
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:18.904 1+0 records in
00:21:18.904 1+0 records out
00:21:18.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793255 s, 5.2 MB/s
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
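Unlike the first pass, this pass hands nbd_start_disk an explicit destination device, so each bdev lands on a fixed /dev/nbdX node (note the jump from nbd1 to nbd10 in the device list). A sketch of the loop the trace is executing, with the names used in this run:

  # Start each bdev on a fixed nbd device; the RPC echoes back the path it attached.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for ((i = 0; i < ${#bdev_list[@]}; i++)); do
      $rpc -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
      waitfornbd "$(basename "${nbd_list[i]}")"   # readiness check, as sketched earlier
  done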
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:18.904 17:22:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd0",
00:21:19.163 "bdev_name": "nvme0n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd1",
00:21:19.163 "bdev_name": "nvme1n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd10",
00:21:19.163 "bdev_name": "nvme2n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd11",
00:21:19.163 "bdev_name": "nvme2n2"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd12",
00:21:19.163 "bdev_name": "nvme2n3"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd13",
00:21:19.163 "bdev_name": "nvme3n1"
00:21:19.163 }
00:21:19.163 ]'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd0",
00:21:19.163 "bdev_name": "nvme0n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd1",
00:21:19.163 "bdev_name": "nvme1n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd10",
00:21:19.163 "bdev_name": "nvme2n1"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd11",
00:21:19.163 "bdev_name": "nvme2n2"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd12",
00:21:19.163 "bdev_name": "nvme2n3"
00:21:19.163 },
00:21:19.163 {
00:21:19.163 "nbd_device": "/dev/nbd13",
00:21:19.163 "bdev_name": "nvme3n1"
00:21:19.163 }
00:21:19.163 ]'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:21:19.163 /dev/nbd1
00:21:19.163 /dev/nbd10
00:21:19.163 /dev/nbd11
00:21:19.163 /dev/nbd12
00:21:19.163 /dev/nbd13'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:21:19.163 /dev/nbd1
00:21:19.163 /dev/nbd10
00:21:19.163 /dev/nbd11
00:21:19.163 /dev/nbd12
00:21:19.163 /dev/nbd13'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:21:19.163 256+0 records in
00:21:19.163 256+0 records out
00:21:19.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637672 s, 164 MB/s
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:19.163 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:21:19.422 256+0 records in
00:21:19.422 256+0 records out
00:21:19.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174405 s, 6.0 MB/s
00:21:19.422 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:19.422 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:21:19.680 256+0 records in
00:21:19.680 256+0 records out
00:21:19.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191017 s, 5.5 MB/s
00:21:19.680 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:19.680 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:21:19.680 256+0 records in
00:21:19.680 256+0 records out
00:21:19.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160039 s, 6.6 MB/s
00:21:19.680 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:19.680 17:22:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:21:19.938 256+0 records in
00:21:19.938 256+0 records out
00:21:19.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176677 s, 5.9 MB/s
00:21:19.938 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:19.938 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:21:20.197 256+0 records in
00:21:20.197 256+0 records out
00:21:20.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189487 s, 5.5 MB/s
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:21:20.197 256+0 records in
00:21:20.197 256+0 records out
00:21:20.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191468 s, 5.5 MB/s
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.197 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
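The write/verify pattern running above, in one place: seed 1 MiB of random data once, stream it to every exported device with O_DIRECT, then compare each device's contents back against the seed file. Condensed from the trace, with the paths used in this run:

  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
  for i in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct # write through the nbd layer
  done
  for i in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$i"                            # byte-for-byte readback check
  done
  rm "$tmp_file"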
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:20.456 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:21:20.714 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:20.715 17:22:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
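waitfornbd_exit is the teardown mirror of waitfornbd: after nbd_stop_disk, it polls until the kernel has dropped the device from /proc/partitions. A minimal sketch inferred from the xtrace above (the retry pacing is an assumption; in this run the device is gone on the first check):

  # waitfornbd_exit: block until /dev/$1 has been unregistered.
  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break   # gone: stop polling
          sleep 0.1
      done
      return 0
  }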
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:20.973 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:21.232 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:21.491 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:21.765 17:22:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:22.034 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:21:22.292 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:21:22.549 malloc_lvol_verify
00:21:22.549 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:21:22.807 a3133acf-b335-4455-99c0-37d0a8a3a93d
00:21:22.807 17:22:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:21:23.063 0b505687-1af6-478d-b057-c6f7c8219a7e
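The lvol leg just built its whole stack over RPC: a malloc bdev, an lvstore on top of it, and a small logical volume carved out of that. Condensed from the trace above (sizes and names as logged), together with the nbd export and mkfs that follow:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks; prints its name
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol; prints the lvol UUID
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # prove it behaves like a real disk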
00:21:23.063 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:21:23.320 /dev/nbd0
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:21:23.320 mke2fs 1.46.5 (30-Dec-2021)
00:21:23.320 Discarding device blocks: 0/4096 done
00:21:23.320 Creating filesystem with 4096 1k blocks and 1024 inodes
00:21:23.320
00:21:23.320 Allocating group tables: 0/1 done
00:21:23.320 Writing inode tables: 0/1 done
00:21:23.320 Creating journal (1024 blocks): done
00:21:23.320 Writing superblocks and filesystem accounting information: 0/1 done
00:21:23.320
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:23.320 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75144
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75144 ']'
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75144
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75144
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:23.578 killing process with pid 75144
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75144'
00:21:23.578 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75144
00:21:23.637 17:22:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75144
00:21:24.950 17:22:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:21:24.950
00:21:24.950 real 0m12.271s
00:21:24.950 user 0m16.886s
00:21:24.950 sys 0m4.251s
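killprocess, as traced above, is the harness helper that tears down the bdev_svc app once the nbd tests are done: confirm the pid is alive, log, signal it, and wait so its exit status is collected. A simplified sketch (the canonical version lives in common/autotest_common.sh; only the steps visible in this trace are shown):

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid"                                   # fail fast if it already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"                                      # default SIGTERM
      wait "$pid"                                      # reap and propagate exit status
  }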
00:21:24.950 17:22:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:24.950 17:22:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:21:24.950 ************************************
00:21:24.950 END TEST bdev_nbd
00:21:24.950 ************************************
00:21:24.950 17:22:10 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:21:24.950 17:22:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']'
00:21:24.950 17:22:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']'
00:21:24.950 17:22:10 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:21:24.950 17:22:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:24.950 17:22:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:24.950 17:22:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:24.950 ************************************
00:21:24.950 START TEST bdev_fio
00:21:24.950 ************************************
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite ''
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:21:24.950 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
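At this point fio_config_gen has written the global section of bdev.fio (the touch and cat steps; the section's contents are piped in and so are not echoed into this log), checked the fio version, and appended serialize_overlap=1. The per-bdev job sections added next reduce to the following loop, with the bdev names from this run:

  # One [job_<bdev>] section per bdev, appended to the generated config.
  config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
      {
          echo "[job_$b]"
          echo "filename=$b"
      } >> "$config_file"
  done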
17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:24.950 ************************************ 00:21:24.950 START TEST bdev_fio_rw_verify 00:21:24.950 ************************************ 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:21:24.950 17:22:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:24.950 17:22:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:24.950 17:22:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:24.950 17:22:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:21:24.950 17:22:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:24.951 17:22:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:25.209 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:25.209 fio-3.35 00:21:25.209 Starting 6 threads 00:21:37.407 00:21:37.407 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75565: Wed Jul 24 17:22:22 2024 00:21:37.407 read: IOPS=27.8k, 
BW=109MiB/s (114MB/s)(1087MiB/10001msec) 00:21:37.407 slat (usec): min=3, max=1669, avg= 8.26, stdev= 7.08 00:21:37.407 clat (usec): min=89, max=5312, avg=653.28, stdev=250.23 00:21:37.407 lat (usec): min=93, max=5622, avg=661.53, stdev=251.41 00:21:37.407 clat percentiles (usec): 00:21:37.407 | 50.000th=[ 660], 99.000th=[ 1237], 99.900th=[ 1778], 99.990th=[ 3720], 00:21:37.407 | 99.999th=[ 5276] 00:21:37.407 write: IOPS=28.2k, BW=110MiB/s (115MB/s)(1100MiB/10001msec); 0 zone resets 00:21:37.407 slat (usec): min=12, max=5436, avg=29.04, stdev=35.94 00:21:37.407 clat (usec): min=93, max=6895, avg=775.47, stdev=263.51 00:21:37.407 lat (usec): min=114, max=6920, avg=804.50, stdev=266.70 00:21:37.407 clat percentiles (usec): 00:21:37.407 | 50.000th=[ 783], 99.000th=[ 1467], 99.900th=[ 2073], 99.990th=[ 3982], 00:21:37.407 | 99.999th=[ 6849] 00:21:37.407 bw ( KiB/s): min=96148, max=139929, per=100.00%, avg=113559.37, stdev=2345.72, samples=114 00:21:37.407 iops : min=24036, max=34982, avg=28389.37, stdev=586.43, samples=114 00:21:37.407 lat (usec) : 100=0.01%, 250=3.10%, 500=18.41%, 750=32.54%, 1000=34.38% 00:21:37.407 lat (msec) : 2=11.48%, 4=0.09%, 10=0.01% 00:21:37.407 cpu : usr=58.47%, sys=26.82%, ctx=7652, majf=0, minf=23893 00:21:37.407 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.407 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.407 issued rwts: total=278271,281596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.407 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:37.407 00:21:37.407 Run status group 0 (all jobs): 00:21:37.407 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1087MiB (1140MB), run=10001-10001msec 00:21:37.407 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1100MiB (1153MB), run=10001-10001msec 00:21:37.407 ----------------------------------------------------- 00:21:37.407 Suppressions used: 00:21:37.407 count bytes template 00:21:37.407 6 48 /usr/src/fio/parse.c 00:21:37.407 3126 300096 /usr/src/fio/iolog.c 00:21:37.407 1 8 libtcmalloc_minimal.so 00:21:37.407 1 904 libcrypto.so 00:21:37.407 ----------------------------------------------------- 00:21:37.407 00:21:37.407 00:21:37.407 real 0m12.400s 00:21:37.407 user 0m36.955s 00:21:37.407 sys 0m16.491s 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.407 ************************************ 00:21:37.407 END TEST bdev_fio_rw_verify 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:37.407 ************************************ 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1283 -- # local env_context= 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "82b09649-3732-4334-b2c9-0de8ea173185"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "82b09649-3732-4334-b2c9-0de8ea173185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "03c89791-cc92-431e-9fa1-d7276b72fce2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03c89791-cc92-431e-9fa1-d7276b72fce2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8d69c49c-3588-4577-827b-788524064a5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d69c49c-3588-4577-827b-788524064a5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c8104883-7793-4eef-aa08-8cd28127646f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8104883-7793-4eef-aa08-8cd28127646f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9049c42f-18d1-498b-bb9e-3f61de28fa1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9049c42f-18d1-498b-bb9e-3f61de28fa1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "049ec522-b3ea-4aca-95cf-d9daba8f4bed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "049ec522-b3ea-4aca-95cf-d9daba8f4bed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:37.407 /home/vagrant/spdk_repo/spdk 00:21:37.407 17:22:23 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:37.407 00:21:37.407 real 0m12.584s 00:21:37.407 user 0m37.058s 00:21:37.407 sys 0m16.572s 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.407 17:22:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:37.407 ************************************ 00:21:37.407 END TEST bdev_fio 00:21:37.407 ************************************ 00:21:37.407 17:22:23 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:37.407 17:22:23 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:37.407 17:22:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:21:37.407 17:22:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.407 17:22:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:37.407 ************************************ 00:21:37.407 START TEST bdev_verify 00:21:37.407 ************************************ 00:21:37.407 17:22:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:37.665 [2024-07-24 17:22:23.651788] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:37.665 [2024-07-24 17:22:23.652013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75741 ] 00:21:37.665 [2024-07-24 17:22:23.832608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:37.924 [2024-07-24 17:22:24.097322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.924 [2024-07-24 17:22:24.097340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.490 Running I/O for 5 seconds... 
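The verify pass in flight here is a single bdevperf run against the xnvme bdevs described by bdev.json. A minimal sketch for re-running it by hand with the same parameters, assuming the vagrant layout from this job:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -q 128 queue depth, -o 4096 I/O size in bytes, -w verify read-back
  # verification, -t 5 run for 5 s, -m 0x3 reactors on cores 0 and 1;
  # -C lets every core submit to every bdev, which is why each job is
  # reported twice in the table below (Core Mask 0x1 and 0x2).
  sudo "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3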
00:21:43.757 00:21:43.757 Latency(us) 00:21:43.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.757 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.757 Verification LBA range: start 0x0 length 0xa0000 00:21:43.757 nvme0n1 : 5.04 1626.40 6.35 0.00 0.00 78548.64 10783.65 82932.83 00:21:43.757 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.757 Verification LBA range: start 0xa0000 length 0xa0000 00:21:43.757 nvme0n1 : 5.05 1547.27 6.04 0.00 0.00 82565.84 14358.34 112483.61 00:21:43.757 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x0 length 0xbd0bd 00:21:43.758 nvme1n1 : 5.07 3019.37 11.79 0.00 0.00 42011.90 5183.30 78643.20 00:21:43.758 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:43.758 nvme1n1 : 5.05 2909.90 11.37 0.00 0.00 43707.46 5242.88 91512.09 00:21:43.758 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x0 length 0x80000 00:21:43.758 nvme2n1 : 5.08 1638.44 6.40 0.00 0.00 77516.21 7566.43 73876.95 00:21:43.758 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x80000 length 0x80000 00:21:43.758 nvme2n1 : 5.04 1550.50 6.06 0.00 0.00 81908.85 9055.88 113436.86 00:21:43.758 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x0 length 0x80000 00:21:43.758 nvme2n2 : 5.08 1639.29 6.40 0.00 0.00 77326.47 7298.33 75306.82 00:21:43.758 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x80000 length 0x80000 00:21:43.758 nvme2n2 : 5.07 1563.85 6.11 0.00 0.00 81044.76 5213.09 90558.84 00:21:43.758 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x0 length 0x80000 00:21:43.758 nvme2n3 : 5.08 1637.16 6.40 0.00 0.00 77275.16 9651.67 71017.19 00:21:43.758 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x80000 length 0x80000 00:21:43.758 nvme2n3 : 5.08 1562.80 6.10 0.00 0.00 80942.11 5332.25 118203.11 00:21:43.758 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x0 length 0x20000 00:21:43.758 nvme3n1 : 5.09 1636.00 6.39 0.00 0.00 77194.20 8102.63 80549.70 00:21:43.758 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.758 Verification LBA range: start 0x20000 length 0x20000 00:21:43.758 nvme3n1 : 5.08 1561.62 6.10 0.00 0.00 80858.92 6315.29 125829.12 00:21:43.758 =================================================================================================================== 00:21:43.758 Total : 21892.59 85.52 0.00 0.00 69561.49 5183.30 125829.12 00:21:44.730 00:21:44.730 real 0m7.292s 00:21:44.730 user 0m11.219s 00:21:44.730 sys 0m1.873s 00:21:44.730 17:22:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.730 17:22:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:44.730 ************************************ 00:21:44.730 END TEST bdev_verify 00:21:44.730 ************************************ 00:21:44.730 17:22:30 blockdev_xnvme -- bdev/blockdev.sh@777 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:44.730 17:22:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:21:44.730 17:22:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.730 17:22:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:44.730 ************************************ 00:21:44.730 START TEST bdev_verify_big_io 00:21:44.730 ************************************ 00:21:44.730 17:22:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:44.995 [2024-07-24 17:22:30.985089] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:44.996 [2024-07-24 17:22:30.985252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75840 ] 00:21:44.996 [2024-07-24 17:22:31.146118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:45.254 [2024-07-24 17:22:31.370830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.254 [2024-07-24 17:22:31.370834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.820 Running I/O for 5 seconds... 00:21:52.414 00:21:52.414 Latency(us) 00:21:52.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.414 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0xa000 00:21:52.414 nvme0n1 : 5.93 102.56 6.41 0.00 0.00 1174779.84 163005.91 1006632.96 00:21:52.414 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0xa000 length 0xa000 00:21:52.414 nvme0n1 : 5.84 87.63 5.48 0.00 0.00 1375284.60 114866.73 2943638.81 00:21:52.414 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0xbd0b 00:21:52.414 nvme1n1 : 5.76 175.12 10.95 0.00 0.00 676396.54 50998.92 861738.82 00:21:52.414 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:52.414 nvme1n1 : 5.94 172.51 10.78 0.00 0.00 702937.77 30384.87 899868.86 00:21:52.414 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0x8000 00:21:52.414 nvme2n1 : 5.95 158.42 9.90 0.00 0.00 739136.91 140127.88 861738.82 00:21:52.414 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x8000 length 0x8000 00:21:52.414 nvme2n1 : 5.94 140.10 8.76 0.00 0.00 829506.74 118203.11 1433689.37 00:21:52.414 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0x8000 00:21:52.414 nvme2n2 : 5.96 96.71 6.04 0.00 0.00 1175704.67 118679.74 2272550.17 00:21:52.414 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x8000 length 0x8000 00:21:52.414 nvme2n2 : 5.99 
96.19 6.01 0.00 0.00 1168069.15 183977.43 2562338.44 00:21:52.414 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0x8000 00:21:52.414 nvme2n3 : 5.96 126.17 7.89 0.00 0.00 883594.51 20018.27 1891249.80 00:21:52.414 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x8000 length 0x8000 00:21:52.414 nvme2n3 : 5.98 136.99 8.56 0.00 0.00 795322.02 64821.06 1319299.26 00:21:52.414 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x0 length 0x2000 00:21:52.414 nvme3n1 : 5.96 118.06 7.38 0.00 0.00 914509.94 13762.56 2425070.31 00:21:52.414 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:52.414 Verification LBA range: start 0x2000 length 0x2000 00:21:52.414 nvme3n1 : 5.99 157.53 9.85 0.00 0.00 677131.57 7387.69 2043769.95 00:21:52.414 =================================================================================================================== 00:21:52.414 Total : 1568.00 98.00 0.00 0.00 877183.94 7387.69 2943638.81 00:21:53.349 00:21:53.349 real 0m8.509s 00:21:53.349 user 0m15.131s 00:21:53.349 sys 0m0.667s 00:21:53.349 17:22:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.349 17:22:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.349 ************************************ 00:21:53.349 END TEST bdev_verify_big_io 00:21:53.349 ************************************ 00:21:53.349 17:22:39 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:53.349 17:22:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:53.349 17:22:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.349 17:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.349 ************************************ 00:21:53.349 START TEST bdev_write_zeroes 00:21:53.349 ************************************ 00:21:53.349 17:22:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:53.349 [2024-07-24 17:22:39.565261] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:53.349 [2024-07-24 17:22:39.565451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:21:53.607 [2024-07-24 17:22:39.744920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.864 [2024-07-24 17:22:40.044058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.430 Running I/O for 1 seconds... 
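The write_zeroes totals in the table below can be sanity-checked from the parameters alone: at -o 4096, throughput in MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256.

  awk 'BEGIN { printf "%.2f\n", 55820.63 / 256 }'   # 218.05 MiB/s, matching the Total row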
00:21:55.363 00:21:55.363 Latency(us) 00:21:55.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.363 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme0n1 : 1.01 8399.23 32.81 0.00 0.00 15218.71 7685.59 34317.03 00:21:55.363 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme1n1 : 1.01 13810.08 53.95 0.00 0.00 9214.86 4230.05 22878.02 00:21:55.363 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme2n1 : 1.02 8422.34 32.90 0.00 0.00 15051.25 6911.07 33840.41 00:21:55.363 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme2n2 : 1.02 8409.15 32.85 0.00 0.00 15044.26 7387.69 32648.84 00:21:55.363 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme2n3 : 1.02 8396.68 32.80 0.00 0.00 15036.92 7685.59 32410.53 00:21:55.363 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:55.363 nvme3n1 : 1.02 8383.15 32.75 0.00 0.00 15031.32 8221.79 33363.78 00:21:55.363 =================================================================================================================== 00:21:55.363 Total : 55820.63 218.05 0.00 0.00 13628.94 4230.05 34317.03 00:21:56.735 00:21:56.735 real 0m3.328s 00:21:56.735 user 0m2.504s 00:21:56.735 sys 0m0.646s 00:21:56.735 17:22:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.735 17:22:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:56.735 ************************************ 00:21:56.735 END TEST bdev_write_zeroes 00:21:56.735 ************************************ 00:21:56.735 17:22:42 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:56.735 17:22:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:56.735 17:22:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.735 17:22:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:56.735 ************************************ 00:21:56.735 START TEST bdev_json_nonenclosed 00:21:56.735 ************************************ 00:21:56.735 17:22:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:56.735 [2024-07-24 17:22:42.947433] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:56.735 [2024-07-24 17:22:42.947626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76016 ] 00:21:56.992 [2024-07-24 17:22:43.124064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.249 [2024-07-24 17:22:43.371136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.249 [2024-07-24 17:22:43.371244] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
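bdev_json_nonenclosed is a deliberate negative test: bdevperf is pointed at test/bdev/nonenclosed.json and must reject it before starting any I/O. That file's contents are not dumped in this log; a hypothetical input that trips the same json_config_prepare_ctx check is a top-level body missing its outer braces:

  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # Feeding this to bdevperf --json should reproduce the
  # 'not enclosed in {}' *ERROR* above and abort startup.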
00:21:57.249 [2024-07-24 17:22:43.371279] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:57.249 [2024-07-24 17:22:43.371298] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:57.814 00:21:57.814 real 0m0.949s 00:21:57.814 user 0m0.674s 00:21:57.814 sys 0m0.167s 00:21:57.814 17:22:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.814 17:22:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:57.814 ************************************ 00:21:57.814 END TEST bdev_json_nonenclosed 00:21:57.814 ************************************ 00:21:57.814 17:22:43 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:57.814 17:22:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:57.814 17:22:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.814 17:22:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:57.814 ************************************ 00:21:57.814 START TEST bdev_json_nonarray 00:21:57.814 ************************************ 00:21:57.814 17:22:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:57.814 [2024-07-24 17:22:43.933537] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:57.814 [2024-07-24 17:22:43.933733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76047 ] 00:21:58.072 [2024-07-24 17:22:44.098891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.330 [2024-07-24 17:22:44.368305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.330 [2024-07-24 17:22:44.368447] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
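bdev_json_nonarray is the companion negative case: the outer object parses, but 'subsystems' is not an array. Again the real nonarray.json is not shown in this log; a hypothetical equivalent that would trigger the same check:

  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": {} }
  EOF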
00:21:58.330 [2024-07-24 17:22:44.368478] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:58.330 [2024-07-24 17:22:44.368495] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:58.588 00:21:58.588 real 0m0.889s 00:21:58.588 user 0m0.642s 00:21:58.588 sys 0m0.141s 00:21:58.588 17:22:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.588 17:22:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:58.588 ************************************ 00:21:58.588 END TEST bdev_json_nonarray 00:21:58.588 ************************************ 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:58.588 17:22:44 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:59.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:03.355 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.921 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.921 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.921 00:22:03.921 real 1m6.118s 00:22:03.921 user 1m42.709s 00:22:03.921 sys 0m32.727s 00:22:03.921 17:22:50 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:03.921 17:22:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:03.921 ************************************ 00:22:03.921 END TEST blockdev_xnvme 00:22:03.921 ************************************ 00:22:03.921 17:22:50 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:03.921 17:22:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:03.921 17:22:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.921 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.921 ************************************ 00:22:03.921 START TEST ublk 00:22:03.921 ************************************ 00:22:03.921 17:22:50 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:04.180 * Looking for test storage... 
00:22:04.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:04.180 17:22:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:04.180 17:22:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:04.180 17:22:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:04.180 17:22:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:04.180 17:22:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:04.180 17:22:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:04.180 17:22:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:04.180 17:22:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:22:04.180 17:22:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:22:04.180 17:22:50 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:04.180 17:22:50 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:04.180 17:22:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:04.180 ************************************ 00:22:04.180 START TEST test_save_ublk_config 00:22:04.180 ************************************ 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76340 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76340 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76340 ']' 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.180 17:22:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:04.180 [2024-07-24 17:22:50.339123] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
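test_save_config reduces to: start spdk_tgt with ublk tracing, create a ublk target and a ublk disk backed by a malloc bdev, then snapshot the live configuration with save_config (the JSON blob dumped below). A hand-run sketch of the same RPC sequence; the rpc.py flag spellings are assumptions, but every method and parameter name appears verbatim in that blob:

  sudo ./build/bin/spdk_tgt -L ublk &
  sleep 1                                    # the suite proper uses waitforlisten
  ./scripts/rpc.py ublk_create_target --cpumask 1
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4 KiB = 32 MiB
  ./scripts/rpc.py ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128
  ./scripts/rpc.py save_config > /tmp/ublk_config.json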
00:22:04.180 [2024-07-24 17:22:50.339339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76340 ] 00:22:04.439 [2024-07-24 17:22:50.513781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.698 [2024-07-24 17:22:50.751742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:05.633 [2024-07-24 17:22:51.528762] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:05.633 [2024-07-24 17:22:51.530050] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:05.633 malloc0 00:22:05.633 [2024-07-24 17:22:51.611897] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:05.633 [2024-07-24 17:22:51.612063] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:05.633 [2024-07-24 17:22:51.612077] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:05.633 [2024-07-24 17:22:51.612088] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:05.633 [2024-07-24 17:22:51.619772] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:05.633 [2024-07-24 17:22:51.619810] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:05.633 [2024-07-24 17:22:51.627778] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:05.633 [2024-07-24 17:22:51.627923] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:05.633 [2024-07-24 17:22:51.651712] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:05.633 0 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.633 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:05.892 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.892 17:22:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:22:05.892 "subsystems": [ 00:22:05.892 { 00:22:05.892 "subsystem": "keyring", 00:22:05.892 "config": [] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "iobuf", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "iobuf_set_options", 00:22:05.892 "params": { 00:22:05.892 "small_pool_count": 8192, 00:22:05.892 "large_pool_count": 1024, 00:22:05.892 "small_bufsize": 8192, 00:22:05.892 "large_bufsize": 135168 00:22:05.892 } 00:22:05.892 } 00:22:05.892 ] 00:22:05.892 }, 00:22:05.892 { 
00:22:05.892 "subsystem": "sock", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "sock_set_default_impl", 00:22:05.892 "params": { 00:22:05.892 "impl_name": "posix" 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "sock_impl_set_options", 00:22:05.892 "params": { 00:22:05.892 "impl_name": "ssl", 00:22:05.892 "recv_buf_size": 4096, 00:22:05.892 "send_buf_size": 4096, 00:22:05.892 "enable_recv_pipe": true, 00:22:05.892 "enable_quickack": false, 00:22:05.892 "enable_placement_id": 0, 00:22:05.892 "enable_zerocopy_send_server": true, 00:22:05.892 "enable_zerocopy_send_client": false, 00:22:05.892 "zerocopy_threshold": 0, 00:22:05.892 "tls_version": 0, 00:22:05.892 "enable_ktls": false 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "sock_impl_set_options", 00:22:05.892 "params": { 00:22:05.892 "impl_name": "posix", 00:22:05.892 "recv_buf_size": 2097152, 00:22:05.892 "send_buf_size": 2097152, 00:22:05.892 "enable_recv_pipe": true, 00:22:05.892 "enable_quickack": false, 00:22:05.892 "enable_placement_id": 0, 00:22:05.892 "enable_zerocopy_send_server": true, 00:22:05.892 "enable_zerocopy_send_client": false, 00:22:05.892 "zerocopy_threshold": 0, 00:22:05.892 "tls_version": 0, 00:22:05.892 "enable_ktls": false 00:22:05.892 } 00:22:05.892 } 00:22:05.892 ] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "vmd", 00:22:05.892 "config": [] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "accel", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "accel_set_options", 00:22:05.892 "params": { 00:22:05.892 "small_cache_size": 128, 00:22:05.892 "large_cache_size": 16, 00:22:05.892 "task_count": 2048, 00:22:05.892 "sequence_count": 2048, 00:22:05.892 "buf_count": 2048 00:22:05.892 } 00:22:05.892 } 00:22:05.892 ] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "bdev", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "bdev_set_options", 00:22:05.892 "params": { 00:22:05.892 "bdev_io_pool_size": 65535, 00:22:05.892 "bdev_io_cache_size": 256, 00:22:05.892 "bdev_auto_examine": true, 00:22:05.892 "iobuf_small_cache_size": 128, 00:22:05.892 "iobuf_large_cache_size": 16 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_raid_set_options", 00:22:05.892 "params": { 00:22:05.892 "process_window_size_kb": 1024, 00:22:05.892 "process_max_bandwidth_mb_sec": 0 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_iscsi_set_options", 00:22:05.892 "params": { 00:22:05.892 "timeout_sec": 30 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_nvme_set_options", 00:22:05.892 "params": { 00:22:05.892 "action_on_timeout": "none", 00:22:05.892 "timeout_us": 0, 00:22:05.892 "timeout_admin_us": 0, 00:22:05.892 "keep_alive_timeout_ms": 10000, 00:22:05.892 "arbitration_burst": 0, 00:22:05.892 "low_priority_weight": 0, 00:22:05.892 "medium_priority_weight": 0, 00:22:05.892 "high_priority_weight": 0, 00:22:05.892 "nvme_adminq_poll_period_us": 10000, 00:22:05.892 "nvme_ioq_poll_period_us": 0, 00:22:05.892 "io_queue_requests": 0, 00:22:05.892 "delay_cmd_submit": true, 00:22:05.892 "transport_retry_count": 4, 00:22:05.892 "bdev_retry_count": 3, 00:22:05.892 "transport_ack_timeout": 0, 00:22:05.892 "ctrlr_loss_timeout_sec": 0, 00:22:05.892 "reconnect_delay_sec": 0, 00:22:05.892 "fast_io_fail_timeout_sec": 0, 00:22:05.892 "disable_auto_failback": false, 00:22:05.892 "generate_uuids": false, 00:22:05.892 "transport_tos": 0, 00:22:05.892 "nvme_error_stat": false, 
00:22:05.892 "rdma_srq_size": 0, 00:22:05.892 "io_path_stat": false, 00:22:05.892 "allow_accel_sequence": false, 00:22:05.892 "rdma_max_cq_size": 0, 00:22:05.892 "rdma_cm_event_timeout_ms": 0, 00:22:05.892 "dhchap_digests": [ 00:22:05.892 "sha256", 00:22:05.892 "sha384", 00:22:05.892 "sha512" 00:22:05.892 ], 00:22:05.892 "dhchap_dhgroups": [ 00:22:05.892 "null", 00:22:05.892 "ffdhe2048", 00:22:05.892 "ffdhe3072", 00:22:05.892 "ffdhe4096", 00:22:05.892 "ffdhe6144", 00:22:05.892 "ffdhe8192" 00:22:05.892 ] 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_nvme_set_hotplug", 00:22:05.892 "params": { 00:22:05.892 "period_us": 100000, 00:22:05.892 "enable": false 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_malloc_create", 00:22:05.892 "params": { 00:22:05.892 "name": "malloc0", 00:22:05.892 "num_blocks": 8192, 00:22:05.892 "block_size": 4096, 00:22:05.892 "physical_block_size": 4096, 00:22:05.892 "uuid": "5a5307d7-a36a-428d-8fe6-bb7630321424", 00:22:05.892 "optimal_io_boundary": 0, 00:22:05.892 "md_size": 0, 00:22:05.892 "dif_type": 0, 00:22:05.892 "dif_is_head_of_md": false, 00:22:05.892 "dif_pi_format": 0 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "bdev_wait_for_examine" 00:22:05.892 } 00:22:05.892 ] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "scsi", 00:22:05.892 "config": null 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "scheduler", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "framework_set_scheduler", 00:22:05.892 "params": { 00:22:05.892 "name": "static" 00:22:05.892 } 00:22:05.892 } 00:22:05.892 ] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "vhost_scsi", 00:22:05.892 "config": [] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "vhost_blk", 00:22:05.892 "config": [] 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "subsystem": "ublk", 00:22:05.892 "config": [ 00:22:05.892 { 00:22:05.892 "method": "ublk_create_target", 00:22:05.892 "params": { 00:22:05.892 "cpumask": "1" 00:22:05.892 } 00:22:05.892 }, 00:22:05.892 { 00:22:05.892 "method": "ublk_start_disk", 00:22:05.892 "params": { 00:22:05.892 "bdev_name": "malloc0", 00:22:05.892 "ublk_id": 0, 00:22:05.893 "num_queues": 1, 00:22:05.893 "queue_depth": 128 00:22:05.893 } 00:22:05.893 } 00:22:05.893 ] 00:22:05.893 }, 00:22:05.893 { 00:22:05.893 "subsystem": "nbd", 00:22:05.893 "config": [] 00:22:05.893 }, 00:22:05.893 { 00:22:05.893 "subsystem": "nvmf", 00:22:05.893 "config": [ 00:22:05.893 { 00:22:05.893 "method": "nvmf_set_config", 00:22:05.893 "params": { 00:22:05.893 "discovery_filter": "match_any", 00:22:05.893 "admin_cmd_passthru": { 00:22:05.893 "identify_ctrlr": false 00:22:05.893 } 00:22:05.893 } 00:22:05.893 }, 00:22:05.893 { 00:22:05.893 "method": "nvmf_set_max_subsystems", 00:22:05.893 "params": { 00:22:05.893 "max_subsystems": 1024 00:22:05.893 } 00:22:05.893 }, 00:22:05.893 { 00:22:05.893 "method": "nvmf_set_crdt", 00:22:05.893 "params": { 00:22:05.893 "crdt1": 0, 00:22:05.893 "crdt2": 0, 00:22:05.893 "crdt3": 0 00:22:05.893 } 00:22:05.893 } 00:22:05.893 ] 00:22:05.893 }, 00:22:05.893 { 00:22:05.893 "subsystem": "iscsi", 00:22:05.893 "config": [ 00:22:05.893 { 00:22:05.893 "method": "iscsi_set_options", 00:22:05.893 "params": { 00:22:05.893 "node_base": "iqn.2016-06.io.spdk", 00:22:05.893 "max_sessions": 128, 00:22:05.893 "max_connections_per_session": 2, 00:22:05.893 "max_queue_depth": 64, 00:22:05.893 "default_time2wait": 2, 00:22:05.893 "default_time2retain": 20, 00:22:05.893 
"first_burst_length": 8192, 00:22:05.893 "immediate_data": true, 00:22:05.893 "allow_duplicated_isid": false, 00:22:05.893 "error_recovery_level": 0, 00:22:05.893 "nop_timeout": 60, 00:22:05.893 "nop_in_interval": 30, 00:22:05.893 "disable_chap": false, 00:22:05.893 "require_chap": false, 00:22:05.893 "mutual_chap": false, 00:22:05.893 "chap_group": 0, 00:22:05.893 "max_large_datain_per_connection": 64, 00:22:05.893 "max_r2t_per_connection": 4, 00:22:05.893 "pdu_pool_size": 36864, 00:22:05.893 "immediate_data_pool_size": 16384, 00:22:05.893 "data_out_pool_size": 2048 00:22:05.893 } 00:22:05.893 } 00:22:05.893 ] 00:22:05.893 } 00:22:05.893 ] 00:22:05.893 }' 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76340 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76340 ']' 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76340 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76340 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:05.893 killing process with pid 76340 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76340' 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76340 00:22:05.893 17:22:51 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76340 00:22:07.269 [2024-07-24 17:22:53.349122] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:07.269 [2024-07-24 17:22:53.390720] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:07.269 [2024-07-24 17:22:53.390975] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:07.269 [2024-07-24 17:22:53.398786] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:07.269 [2024-07-24 17:22:53.398887] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:07.269 [2024-07-24 17:22:53.398905] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:07.269 [2024-07-24 17:22:53.398946] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:22:07.269 [2024-07-24 17:22:53.399140] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76406 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76406 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76406 ']' 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.652 17:22:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:22:08.652 "subsystems": [ 00:22:08.652 { 00:22:08.652 "subsystem": "keyring", 00:22:08.652 "config": [] 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "subsystem": "iobuf", 
00:22:08.652 "config": [ 00:22:08.652 { 00:22:08.652 "method": "iobuf_set_options", 00:22:08.652 "params": { 00:22:08.652 "small_pool_count": 8192, 00:22:08.652 "large_pool_count": 1024, 00:22:08.652 "small_bufsize": 8192, 00:22:08.652 "large_bufsize": 135168 00:22:08.652 } 00:22:08.652 } 00:22:08.652 ] 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "subsystem": "sock", 00:22:08.652 "config": [ 00:22:08.652 { 00:22:08.652 "method": "sock_set_default_impl", 00:22:08.652 "params": { 00:22:08.652 "impl_name": "posix" 00:22:08.652 } 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "method": "sock_impl_set_options", 00:22:08.652 "params": { 00:22:08.652 "impl_name": "ssl", 00:22:08.652 "recv_buf_size": 4096, 00:22:08.652 "send_buf_size": 4096, 00:22:08.652 "enable_recv_pipe": true, 00:22:08.652 "enable_quickack": false, 00:22:08.652 "enable_placement_id": 0, 00:22:08.652 "enable_zerocopy_send_server": true, 00:22:08.652 "enable_zerocopy_send_client": false, 00:22:08.652 "zerocopy_threshold": 0, 00:22:08.652 "tls_version": 0, 00:22:08.652 "enable_ktls": false 00:22:08.652 } 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "method": "sock_impl_set_options", 00:22:08.652 "params": { 00:22:08.652 "impl_name": "posix", 00:22:08.652 "recv_buf_size": 2097152, 00:22:08.652 "send_buf_size": 2097152, 00:22:08.652 "enable_recv_pipe": true, 00:22:08.652 "enable_quickack": false, 00:22:08.652 "enable_placement_id": 0, 00:22:08.652 "enable_zerocopy_send_server": true, 00:22:08.652 "enable_zerocopy_send_client": false, 00:22:08.652 "zerocopy_threshold": 0, 00:22:08.652 "tls_version": 0, 00:22:08.652 "enable_ktls": false 00:22:08.652 } 00:22:08.652 } 00:22:08.652 ] 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "subsystem": "vmd", 00:22:08.652 "config": [] 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "subsystem": "accel", 00:22:08.652 "config": [ 00:22:08.652 { 00:22:08.652 "method": "accel_set_options", 00:22:08.652 "params": { 00:22:08.652 "small_cache_size": 128, 00:22:08.652 "large_cache_size": 16, 00:22:08.652 "task_count": 2048, 00:22:08.652 "sequence_count": 2048, 00:22:08.652 "buf_count": 2048 00:22:08.652 } 00:22:08.652 } 00:22:08.652 ] 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "subsystem": "bdev", 00:22:08.652 "config": [ 00:22:08.652 { 00:22:08.652 "method": "bdev_set_options", 00:22:08.652 "params": { 00:22:08.652 "bdev_io_pool_size": 65535, 00:22:08.652 "bdev_io_cache_size": 256, 00:22:08.652 "bdev_auto_examine": true, 00:22:08.652 "iobuf_small_cache_size": 128, 00:22:08.652 "iobuf_large_cache_size": 16 00:22:08.652 } 00:22:08.652 }, 00:22:08.652 { 00:22:08.652 "method": "bdev_raid_set_options", 00:22:08.652 "params": { 00:22:08.652 "process_window_size_kb": 1024, 00:22:08.652 "process_max_bandwidth_mb_sec": 0 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "bdev_iscsi_set_options", 00:22:08.653 "params": { 00:22:08.653 "timeout_sec": 30 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "bdev_nvme_set_options", 00:22:08.653 "params": { 00:22:08.653 "action_on_timeout": "none", 00:22:08.653 "timeout_us": 0, 00:22:08.653 "timeout_admin_us": 0, 00:22:08.653 "keep_alive_timeout_ms": 10000, 00:22:08.653 "arbitration_burst": 0, 00:22:08.653 "low_priority_weight": 0, 00:22:08.653 "medium_priority_weight": 0, 00:22:08.653 "high_priority_weight": 0, 00:22:08.653 "nvme_adminq_poll_period_us": 10000, 00:22:08.653 "nvme_ioq_poll_period_us": 0, 00:22:08.653 "io_queue_requests": 0, 00:22:08.653 "delay_cmd_submit": true, 00:22:08.653 "transport_retry_count": 4, 00:22:08.653 
"bdev_retry_count": 3, 00:22:08.653 "transport_ack_timeout": 0, 00:22:08.653 "ctrlr_loss_timeout_sec": 0, 00:22:08.653 "reconnect_delay_sec": 0, 00:22:08.653 "fast_io_fail_timeout_sec": 0, 00:22:08.653 "disable_auto_failback": false, 00:22:08.653 "generate_uuids": false, 00:22:08.653 "transport_tos": 0, 00:22:08.653 "nvme_error_stat": false, 00:22:08.653 "rdma_srq_size": 0, 00:22:08.653 "io_path_stat": false, 00:22:08.653 "allow_accel_sequence": false, 00:22:08.653 "rdma_max_cq_size": 0, 00:22:08.653 "rdma_cm_event_timeout_ms": 0, 00:22:08.653 "dhchap_digests": [ 00:22:08.653 "sha256", 00:22:08.653 "sha384", 00:22:08.653 "sha512" 00:22:08.653 ], 00:22:08.653 "dhchap_dhgroups": [ 00:22:08.653 "null", 00:22:08.653 "ffdhe2048", 00:22:08.653 "ffdhe3072", 00:22:08.653 "ffdhe4096", 00:22:08.653 "ffdhe6144", 00:22:08.653 "ffdhe8192" 00:22:08.653 ] 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "bdev_nvme_set_hotplug", 00:22:08.653 "params": { 00:22:08.653 "period_us": 100000, 00:22:08.653 "enable": false 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "bdev_malloc_create", 00:22:08.653 "params": { 00:22:08.653 "name": "malloc0", 00:22:08.653 "num_blocks": 8192, 00:22:08.653 "block_size": 4096, 00:22:08.653 "physical_block_size": 4096, 00:22:08.653 "uuid": "5a5307d7-a36a-428d-8fe6-bb7630321424", 00:22:08.653 "optimal_io_boundary": 0, 00:22:08.653 "md_size": 0, 00:22:08.653 "dif_type": 0, 00:22:08.653 "dif_is_head_of_md": false, 00:22:08.653 "dif_pi_format": 0 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "bdev_wait_for_examine" 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "scsi", 00:22:08.653 "config": null 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "scheduler", 00:22:08.653 "config": [ 00:22:08.653 { 00:22:08.653 "method": "framework_set_scheduler", 00:22:08.653 "params": { 00:22:08.653 "name": "static" 00:22:08.653 } 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "vhost_scsi", 00:22:08.653 "config": [] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "vhost_blk", 00:22:08.653 "config": [] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "ublk", 00:22:08.653 "config": [ 00:22:08.653 { 00:22:08.653 "method": "ublk_create_target", 00:22:08.653 "params": { 00:22:08.653 "cpumask": "1" 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "ublk_start_disk", 00:22:08.653 "params": { 00:22:08.653 "bdev_name": "malloc0", 00:22:08.653 "ublk_id": 0, 00:22:08.653 "num_queues": 1, 00:22:08.653 "queue_depth": 128 00:22:08.653 } 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "nbd", 00:22:08.653 "config": [] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "nvmf", 00:22:08.653 "config": [ 00:22:08.653 { 00:22:08.653 "method": "nvmf_set_config", 00:22:08.653 "params": { 00:22:08.653 "discovery_filter": "match_any", 00:22:08.653 "admin_cmd_passthru": { 00:22:08.653 "identify_ctrlr": false 00:22:08.653 } 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "nvmf_set_max_subsystems", 00:22:08.653 "params": { 00:22:08.653 "max_subsystems": 1024 00:22:08.653 } 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "method": "nvmf_set_crdt", 00:22:08.653 "params": { 00:22:08.653 "crdt1": 0, 00:22:08.653 "crdt2": 0, 00:22:08.653 "crdt3": 0 00:22:08.653 } 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "subsystem": "iscsi", 00:22:08.653 "config": [ 
00:22:08.653 { 00:22:08.653 "method": "iscsi_set_options", 00:22:08.653 "params": { 00:22:08.653 "node_base": "iqn.2016-06.io.spdk", 00:22:08.653 "max_sessions": 128, 00:22:08.653 "max_connections_per_session": 2, 00:22:08.653 "max_queue_depth": 64, 00:22:08.653 "default_time2wait": 2, 00:22:08.653 "default_time2retain": 20, 00:22:08.653 "first_burst_length": 8192, 00:22:08.653 "immediate_data": true, 00:22:08.653 "allow_duplicated_isid": false, 00:22:08.653 "error_recovery_level": 0, 00:22:08.653 "nop_timeout": 60, 00:22:08.653 "nop_in_interval": 30, 00:22:08.653 "disable_chap": false, 00:22:08.653 "require_chap": false, 00:22:08.653 "mutual_chap": false, 00:22:08.653 "chap_group": 0, 00:22:08.653 "max_large_datain_per_connection": 64, 00:22:08.653 "max_r2t_per_connection": 4, 00:22:08.653 "pdu_pool_size": 36864, 00:22:08.653 "immediate_data_pool_size": 16384, 00:22:08.653 "data_out_pool_size": 2048 00:22:08.653 } 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }' 00:22:08.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.653 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.653 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.653 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.653 17:22:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:08.653 [2024-07-24 17:22:54.863860] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:08.653 [2024-07-24 17:22:54.864520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76406 ] 00:22:08.911 [2024-07-24 17:22:55.035398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.169 [2024-07-24 17:22:55.250425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.102 [2024-07-24 17:22:56.130666] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:10.102 [2024-07-24 17:22:56.131829] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:10.102 [2024-07-24 17:22:56.137819] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:10.102 [2024-07-24 17:22:56.137926] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:10.102 [2024-07-24 17:22:56.137940] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:10.102 [2024-07-24 17:22:56.137949] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:10.102 [2024-07-24 17:22:56.145812] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:10.102 [2024-07-24 17:22:56.145856] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:10.102 [2024-07-24 17:22:56.153709] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:10.102 [2024-07-24 17:22:56.153859] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:10.102 [2024-07-24 17:22:56.169743] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76406
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76406 ']'
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76406
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:10.102 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76406
00:22:10.103 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:10.103 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:10.103 killing process with pid 76406
00:22:10.103 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76406'
00:22:10.103 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76406
00:22:10.103 17:22:56 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76406
00:22:11.477 [2024-07-24 17:22:57.677344] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:22:11.735 [2024-07-24 17:22:57.717860] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:22:11.735 [2024-07-24 17:22:57.721721] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:22:11.735 [2024-07-24 17:22:57.726842] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:22:11.735 [2024-07-24 17:22:57.726930] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:22:11.735 [2024-07-24 17:22:57.726946] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:22:11.735 [2024-07-24 17:22:57.726985] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown
00:22:11.735 [2024-07-24 17:22:57.727190] ublk.c: 732:_ublk_fini_done: *DEBUG*:
00:22:13.114 17:22:59 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:22:13.114
00:22:13.114 real 0m8.807s
00:22:13.114 user 0m7.412s
00:22:13.114 sys 0m2.227s
00:22:13.114 17:22:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:13.114 17:22:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:22:13.114 ************************************
00:22:13.114 END TEST test_save_ublk_config
00:22:13.114 ************************************
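test_save_ublk_config, concluded above, checks that a live ublk configuration survives a save/restore round trip: the target's JSON config is captured, the process is killed, and a fresh spdk_tgt is started with the same JSON fed back through -c /dev/fd/63 (the shell substitutes the echoed config as a file descriptor). A minimal sketch of the same round trip done by hand, assuming a running target and the standard SPDK repo layout:

    ./scripts/rpc.py save_config > ublk_config.json    # dump the live JSON configuration
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json   # start a fresh target from that dump

The ublk_get_disks / jq / [[ -b /dev/ublkb0 ]] checks above are what confirm the restored target re-created the disk exactly as configured.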
00:22:13.114 17:22:59 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76485
00:22:13.114 17:22:59 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:22:13.114 17:22:59 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:13.114 17:22:59 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76485
00:22:13.114 17:22:59 ublk -- common/autotest_common.sh@831 -- # '[' -z 76485 ']'
00:22:13.114 17:22:59 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.114 17:22:59 ublk -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:13.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:13.115 17:22:59 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.115 17:22:59 ublk -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:13.115 17:22:59 ublk -- common/autotest_common.sh@10 -- # set +x
00:22:13.115 [2024-07-24 17:22:59.197788] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:22:13.115 [2024-07-24 17:22:59.197975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76485 ]
00:22:13.373 [2024-07-24 17:22:59.373756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:13.373 [2024-07-24 17:22:59.568094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:13.373 [2024-07-24 17:22:59.568103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:14.307 17:23:00 ublk -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:14.307 17:23:00 ublk -- common/autotest_common.sh@864 -- # return 0
00:22:14.307 17:23:00 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:22:14.307 17:23:00 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:14.307 17:23:00 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:14.307 17:23:00 ublk -- common/autotest_common.sh@10 -- # set +x
00:22:14.307 ************************************
00:22:14.307 START TEST test_create_ublk
00:22:14.307 ************************************
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk
00:22:14.307 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:14.307 [2024-07-24 17:23:00.367776] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:22:14.307 [2024-07-24 17:23:00.370850] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.307 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:22:14.307 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.307 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
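This target runs with -m 0x3, a two-core mask, which is why DPDK reports 'Total cores available: 2' and two reactors start (cores 0 and 1); waitforlisten then polls the RPC socket at /var/tmp/spdk.sock. Outside the harness the equivalent is roughly this sketch, where rpc_get_methods is just a cheap RPC polled until the socket answers, standing in for waitforlisten:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &    # -L ublk enables the ublk debug log flag
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done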
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 [2024-07-24 17:23:00.657890] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:22:14.566 [2024-07-24 17:23:00.658498] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:22:14.566 [2024-07-24 17:23:00.658524] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:22:14.566 [2024-07-24 17:23:00.658539] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:22:14.566 [2024-07-24 17:23:00.665257] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:22:14.566 [2024-07-24 17:23:00.665310] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:22:14.566 [2024-07-24 17:23:00.672717] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:22:14.566 [2024-07-24 17:23:00.681945] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:22:14.566 [2024-07-24 17:23:00.695874] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 17:23:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:22:14.566 {
00:22:14.566 "ublk_device": "/dev/ublkb0",
00:22:14.566 "id": 0,
00:22:14.566 "queue_depth": 512,
00:22:14.566 "num_queues": 4,
00:22:14.566 "bdev_name": "Malloc0"
00:22:14.566 }
00:22:14.566 ]'
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:22:14.566 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:22:14.824 17:23:00 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
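Condensed from the xtrace above, test_create_ublk's setup corresponds to roughly these direct scripts/rpc.py calls (a sketch of the wrapped rpc_cmd invocations, not the test script itself):

    ./scripts/rpc.py ublk_create_target                      # one-time ublk target setup
    ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB RAM bdev (Malloc0), 4096-byte blocks
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0
    ./scripts/rpc.py ublk_get_disks -n 0                     # returns the JSON verified above

The interleaved *DEBUG* lines trace the kernel-side handshake behind ublk_start_disk: UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, after which the /dev/ublkb0 node appears.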
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:22:14.824 17:23:00 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:22:14.825 17:23:00 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:22:14.825 17:23:00 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:22:14.825 17:23:00 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:22:15.083 fio: verification read phase will never start because write phase uses all of runtime
00:22:15.083 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:22:15.083 fio-3.35
00:22:15.083 Starting 1 process
00:22:25.055
00:22:25.055 fio_test: (groupid=0, jobs=1): err= 0: pid=76531: Wed Jul 24 17:23:11 2024
00:22:25.055 write: IOPS=9967, BW=38.9MiB/s (40.8MB/s)(389MiB/10001msec); 0 zone resets
00:22:25.055 clat (usec): min=55, max=8166, avg=98.98, stdev=164.31
00:22:25.055 lat (usec): min=56, max=8168, avg=99.69, stdev=164.34
00:22:25.055 clat percentiles (usec):
00:22:25.055 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79],
00:22:25.055 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 89],
00:22:25.055 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 119],
00:22:25.055 | 99.00th=[ 139], 99.50th=[ 159], 99.90th=[ 3326], 99.95th=[ 3654],
00:22:25.055 | 99.99th=[ 4047]
00:22:25.055 bw ( KiB/s): min=16232, max=41984, per=99.78%, avg=39781.68, stdev=5794.96, samples=19
00:22:25.055 iops : min= 4058, max=10496, avg=9945.37, stdev=1448.73, samples=19
00:22:25.055 lat (usec) : 100=81.04%, 250=18.53%, 500=0.01%, 750=0.02%, 1000=0.03%
00:22:25.055 lat (msec) : 2=0.12%, 4=0.23%, 10=0.01%
00:22:25.055 cpu : usr=2.68%, sys=6.90%, ctx=99684, majf=0, minf=797
00:22:25.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:22:25.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:25.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:25.055 issued rwts: total=0,99684,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:25.055 latency : target=0, window=0, percentile=100.00%, depth=1
00:22:25.055
00:22:25.055 Run status group 0 (all jobs):
00:22:25.055 WRITE: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=389MiB (408MB), run=10001-10001msec
00:22:25.055
00:22:25.055 Disk stats (read/write):
00:22:25.055 ublkb0: ios=0/98664, merge=0/0, ticks=0/8977, in_queue=8977, util=99.12%
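fio's 'verification read phase will never start' warning is expected here: with --time_based the write phase consumes the whole --runtime, so verify metadata is recorded but the read-back pass never runs. A variant that does execute the verify pass would simply drop the time limit, e.g. (hypothetical invocation, not part of the test):

    fio --name=verify_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc

The run itself is the signal that matters: err= 0 with ~9.9k IOPS of 4 KiB direct writes at queue depth 1 through the ublk device, and ublkb0 at 99.12% utilization.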
00:22:25.055 17:23:11 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.055 [2024-07-24 17:23:11.218571] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:22:25.055 [2024-07-24 17:23:11.262715] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:22:25.055 [2024-07-24 17:23:11.263993] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:22:25.055 [2024-07-24 17:23:11.270978] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:22:25.055 [2024-07-24 17:23:11.271347] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:22:25.055 [2024-07-24 17:23:11.271364] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.055 17:23:11 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:22:25.055 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.056 [2024-07-24 17:23:11.277851] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:22:25.056 request:
00:22:25.056 {
00:22:25.056 "ublk_id": 0,
00:22:25.056 "method": "ublk_stop_disk",
00:22:25.056 "req_id": 1
00:22:25.056 }
00:22:25.056 Got JSON-RPC error response
00:22:25.056 response:
00:22:25.056 {
00:22:25.056 "code": -19,
00:22:25.056 "message": "No such device"
00:22:25.056 }
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:25.056 17:23:11 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.056 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.314 [2024-07-24 17:23:11.293855] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown
00:22:25.314 [2024-07-24 17:23:11.300018] ublk.c: 732:_ublk_fini_done: *DEBUG*:
00:22:25.314 [2024-07-24 17:23:11.300079] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
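Two behaviors are pinned down above: a second ublk_stop_disk 0 must fail cleanly once the disk is gone (the JSON-RPC error code -19 is the kernel's ENODEV, 'No such device', which the NOT wrapper asserts), and destroying the target must run the ublk shutdown path to completion. The teardown order the test uses, done by hand, is roughly (a sketch):

    ./scripts/rpc.py ublk_stop_disk 0              # a repeat call now fails with code -19 (ENODEV)
    ./scripts/rpc.py ublk_destroy_target           # tears the ublk target down
    ./scripts/rpc.py bdev_malloc_delete Malloc0    # delete the backing bdev last

Disks are stopped before the target is destroyed, and the backing bdev is deleted only once nothing references it.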
00:22:25.314 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.314 17:23:11 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:22:25.314 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.314 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.579 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.579 17:23:11 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:22:25.579 17:23:11 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:22:25.579 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.579 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.579 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.579 17:23:11 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:22:25.579 17:23:11 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:22:25.580 17:23:11 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:22:25.580 17:23:11 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:22:25.580 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.580 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.580 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.580 17:23:11 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:22:25.580 17:23:11 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:22:25.580 ************************************
00:22:25.580 END TEST test_create_ublk
00:22:25.580 ************************************
00:22:25.580 17:23:11 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:22:25.580
00:22:25.580 real 0m11.401s
00:22:25.580 user 0m0.709s
00:22:25.580 sys 0m0.793s
00:22:25.580 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:25.580 17:23:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.580 17:23:11 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:22:25.580 17:23:11 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:25.580 17:23:11 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:25.580 17:23:11 ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.580 ************************************
00:22:25.580 START TEST test_create_multi_ublk
00:22:25.580 ************************************
00:22:25.580 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:22:25.858 [2024-07-24 17:23:11.820719] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:22:25.858 [2024-07-24 17:23:11.823582] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # 
seq 0 3 00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.858 17:23:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:25.858 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.858 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:25.858 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:25.858 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.858 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.116 [2024-07-24 17:23:12.097891] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:26.116 [2024-07-24 17:23:12.098420] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:26.116 [2024-07-24 17:23:12.098439] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:26.116 [2024-07-24 17:23:12.098449] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:26.116 [2024-07-24 17:23:12.106200] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:26.116 [2024-07-24 17:23:12.106231] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:26.116 [2024-07-24 17:23:12.112762] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:26.116 [2024-07-24 17:23:12.113587] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:26.116 [2024-07-24 17:23:12.136694] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.116 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.375 [2024-07-24 17:23:12.419947] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:26.375 [2024-07-24 17:23:12.420549] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:26.375 [2024-07-24 17:23:12.420573] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:26.375 
[2024-07-24 17:23:12.420587] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:26.375 [2024-07-24 17:23:12.427707] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:26.375 [2024-07-24 17:23:12.427760] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:26.375 [2024-07-24 17:23:12.435719] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:26.375 [2024-07-24 17:23:12.436532] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:26.375 [2024-07-24 17:23:12.444753] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.375 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.633 [2024-07-24 17:23:12.723859] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:26.633 [2024-07-24 17:23:12.724371] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:26.633 [2024-07-24 17:23:12.724402] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:26.633 [2024-07-24 17:23:12.724413] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:26.633 [2024-07-24 17:23:12.731711] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:26.633 [2024-07-24 17:23:12.731743] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:26.633 [2024-07-24 17:23:12.742788] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:26.633 [2024-07-24 17:23:12.743679] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:26.633 [2024-07-24 17:23:12.766701] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:26.633 17:23:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:26.634 17:23:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.634 17:23:12 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.892 [2024-07-24 17:23:13.050828] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:26.892 [2024-07-24 17:23:13.051364] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:26.892 [2024-07-24 17:23:13.051381] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:26.892 [2024-07-24 17:23:13.051393] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:26.892 [2024-07-24 17:23:13.061681] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:26.892 [2024-07-24 17:23:13.061867] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:26.892 [2024-07-24 17:23:13.069706] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:26.892 [2024-07-24 17:23:13.070610] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:26.892 [2024-07-24 17:23:13.090680] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:26.892 { 00:22:26.892 "ublk_device": "/dev/ublkb0", 00:22:26.892 "id": 0, 00:22:26.892 "queue_depth": 512, 00:22:26.892 "num_queues": 4, 00:22:26.892 "bdev_name": "Malloc0" 00:22:26.892 }, 00:22:26.892 { 00:22:26.892 "ublk_device": "/dev/ublkb1", 00:22:26.892 "id": 1, 00:22:26.892 "queue_depth": 512, 00:22:26.892 "num_queues": 4, 00:22:26.892 "bdev_name": "Malloc1" 00:22:26.892 }, 00:22:26.892 { 00:22:26.892 "ublk_device": "/dev/ublkb2", 00:22:26.892 "id": 2, 00:22:26.892 "queue_depth": 512, 00:22:26.892 "num_queues": 4, 00:22:26.892 "bdev_name": "Malloc2" 00:22:26.892 }, 00:22:26.892 { 00:22:26.892 "ublk_device": "/dev/ublkb3", 00:22:26.892 "id": 3, 00:22:26.892 "queue_depth": 512, 00:22:26.892 "num_queues": 4, 00:22:26.892 "bdev_name": "Malloc3" 00:22:26.892 } 00:22:26.892 ]' 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:26.892 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:27.151 17:23:13 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:27.151 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:27.409 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:27.667 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:27.924 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:27.924 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:27.924 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:27.924 17:23:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- 
# [[ 512 = \5\1\2 ]] 00:22:27.924 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:27.924 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:27.924 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.925 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:27.925 [2024-07-24 17:23:14.113954] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:28.183 [2024-07-24 17:23:14.166772] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:28.183 [2024-07-24 17:23:14.172136] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:28.183 [2024-07-24 17:23:14.179701] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:28.183 [2024-07-24 17:23:14.180046] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:28.183 [2024-07-24 17:23:14.180067] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.183 [2024-07-24 17:23:14.189821] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:28.183 [2024-07-24 17:23:14.232763] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:28.183 [2024-07-24 17:23:14.234110] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:28.183 [2024-07-24 17:23:14.240734] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:28.183 [2024-07-24 17:23:14.241085] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:28.183 [2024-07-24 17:23:14.241104] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.183 [2024-07-24 17:23:14.255901] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:28.183 [2024-07-24 
17:23:14.291823] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:28.183 [2024-07-24 17:23:14.293172] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:28.183 [2024-07-24 17:23:14.301807] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:28.183 [2024-07-24 17:23:14.302159] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:28.183 [2024-07-24 17:23:14.302180] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.183 [2024-07-24 17:23:14.310865] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:28.183 [2024-07-24 17:23:14.351837] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:28.183 [2024-07-24 17:23:14.353098] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:28.183 [2024-07-24 17:23:14.359715] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:28.183 [2024-07-24 17:23:14.360033] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:28.183 [2024-07-24 17:23:14.360055] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.183 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:28.441 [2024-07-24 17:23:14.633796] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:22:28.441 [2024-07-24 17:23:14.639805] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:22:28.441 [2024-07-24 17:23:14.639861] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:28.441 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:28.441 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.441 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:28.441 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.441 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.007 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.007 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.007 17:23:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:29.007 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.007 17:23:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.265 17:23:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.265 17:23:15 
ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:29.265 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.265 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.523 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.523 17:23:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.523 17:23:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:29.523 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.523 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:29.781 17:23:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:30.039 ************************************ 00:22:30.039 END TEST test_create_multi_ublk 00:22:30.039 ************************************ 00:22:30.039 17:23:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:30.039 00:22:30.039 real 0m4.229s 00:22:30.039 user 0m1.293s 00:22:30.039 sys 0m0.161s 00:22:30.039 17:23:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.039 17:23:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:30.039 17:23:16 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:30.039 17:23:16 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:30.039 17:23:16 ublk -- ublk/ublk.sh@130 -- # killprocess 76485 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@950 -- # '[' -z 76485 ']' 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@954 -- # kill -0 76485 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@955 -- # uname 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76485 00:22:30.039 killing process with pid 76485 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:30.039 17:23:16 
ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76485' 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@969 -- # kill 76485 00:22:30.039 17:23:16 ublk -- common/autotest_common.sh@974 -- # wait 76485 00:22:30.974 [2024-07-24 17:23:17.039776] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:22:30.974 [2024-07-24 17:23:17.039845] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:22:32.359 00:22:32.359 real 0m28.102s 00:22:32.359 user 0m42.026s 00:22:32.359 sys 0m8.376s 00:22:32.359 17:23:18 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.359 17:23:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 ************************************ 00:22:32.359 END TEST ublk 00:22:32.359 ************************************ 00:22:32.359 17:23:18 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:32.359 17:23:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:32.359 17:23:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.359 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 ************************************ 00:22:32.359 START TEST ublk_recovery 00:22:32.359 ************************************ 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:32.359 * Looking for test storage... 00:22:32.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:32.359 17:23:18 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:32.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76874 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.359 17:23:18 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76874 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76874 ']' 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.359 17:23:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 [2024-07-24 17:23:18.486236] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:32.359 [2024-07-24 17:23:18.486444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76874 ] 00:22:32.618 [2024-07-24 17:23:18.666466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:32.876 [2024-07-24 17:23:18.964961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.876 [2024-07-24 17:23:18.965028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:22:33.810 17:23:19 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.810 [2024-07-24 17:23:19.955716] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:33.810 [2024-07-24 17:23:19.958672] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.810 17:23:19 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.810 17:23:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.068 malloc0 00:22:34.068 17:23:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.068 17:23:20 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:34.068 17:23:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.068 17:23:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.068 [2024-07-24 17:23:20.115939] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:22:34.068 [2024-07-24 17:23:20.116111] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:34.068 [2024-07-24 17:23:20.116127] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:34.068 [2024-07-24 17:23:20.116139] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:34.068 [2024-07-24 17:23:20.123705] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:34.068 [2024-07-24 17:23:20.123744] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:34.068 [2024-07-24 17:23:20.130726] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:34.068 [2024-07-24 17:23:20.130941] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:34.068 [2024-07-24 17:23:20.157701] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:34.068 1 00:22:34.068 17:23:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:34.068 17:23:20 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:35.002 17:23:21 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76915 00:22:35.002 17:23:21 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:35.002 17:23:21 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:35.261 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:35.261 fio-3.35 00:22:35.261 Starting 1 process 00:22:40.526 17:23:26 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76874 00:22:40.526 17:23:26 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:45.794 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76874 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:45.794 17:23:31 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77024 00:22:45.794 17:23:31 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:45.794 17:23:31 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.794 17:23:31 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77024 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77024 ']' 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.794 17:23:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.794 [2024-07-24 17:23:31.305877] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
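At this point the scenario is fully set up: the device was provisioned over RPC, fio was started against /dev/ublkb1, the original target (pid 76874) was killed with SIGKILL mid-I/O, and a replacement spdk_tgt (pid 77024) is booting. The provisioning and crash-injection steps, condensed from the rpc_cmd calls above (commands verbatim from the log; $rpc and $spdk_pid as in the earlier sketch):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target                      # kernel-facing ublk target
    "$rpc" bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4096 B blocks
    "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1: 2 queues, QD 128
    # With fio running against /dev/ublkb1, simulate a target crash:
    kill -9 "$spdk_pid"
    # A fresh target is then started and the device is reattached via
    # "ublk_recover_disk malloc0 1", as the log shows next.
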
00:22:45.794 [2024-07-24 17:23:31.306094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77024 ] 00:22:45.794 [2024-07-24 17:23:31.488291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:45.794 [2024-07-24 17:23:31.762038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.794 [2024-07-24 17:23:31.762049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:22:46.360 17:23:32 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.360 [2024-07-24 17:23:32.558740] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:46.360 [2024-07-24 17:23:32.561836] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.360 17:23:32 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.360 17:23:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.616 malloc0 00:22:46.616 17:23:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.617 17:23:32 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:46.617 17:23:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.617 17:23:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.617 [2024-07-24 17:23:32.723849] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:46.617 [2024-07-24 17:23:32.723911] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:46.617 [2024-07-24 17:23:32.723925] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:46.617 [2024-07-24 17:23:32.733739] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:46.617 [2024-07-24 17:23:32.733780] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:46.617 1 00:22:46.617 [2024-07-24 17:23:32.733913] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:46.617 17:23:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.617 17:23:32 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76915 00:23:13.145 [2024-07-24 17:23:56.544706] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:23:13.145 [2024-07-24 17:23:56.551022] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:23:13.145 [2024-07-24 17:23:56.559789] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:23:13.145 [2024-07-24 17:23:56.559829] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:39.688 00:23:39.688 
fio_test: (groupid=0, jobs=1): err= 0: pid=76921: Wed Jul 24 17:24:21 2024 00:23:39.688 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(2395MiB/60002msec) 00:23:39.689 slat (nsec): min=1770, max=1209.3k, avg=6277.26, stdev=4641.47 00:23:39.689 clat (usec): min=912, max=30399k, avg=5809.54, stdev=290460.25 00:23:39.689 lat (usec): min=918, max=30399k, avg=5815.82, stdev=290460.28 00:23:39.689 clat percentiles (usec): 00:23:39.689 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:23:39.689 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2966], 00:23:39.689 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 4080], 00:23:39.689 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 8455], 99.95th=[ 9110], 00:23:39.689 | 99.99th=[13435] 00:23:39.689 bw ( KiB/s): min=29976, max=87928, per=100.00%, avg=81896.27, stdev=10190.45, samples=59 00:23:39.689 iops : min= 7494, max=21982, avg=20474.05, stdev=2547.61, samples=59 00:23:39.689 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(2393MiB/60002msec); 0 zone resets 00:23:39.689 slat (usec): min=2, max=423, avg= 6.28, stdev= 3.99 00:23:39.689 clat (usec): min=878, max=30400k, avg=6707.56, stdev=329490.06 00:23:39.689 lat (usec): min=883, max=30400k, avg=6713.85, stdev=329490.09 00:23:39.689 clat percentiles (msec): 00:23:39.689 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:23:39.689 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:23:39.689 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:23:39.689 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:23:39.689 | 99.99th=[17113] 00:23:39.689 bw ( KiB/s): min=29936, max=90816, per=100.00%, avg=81796.66, stdev=10165.24, samples=59 00:23:39.689 iops : min= 7484, max=22704, avg=20449.15, stdev=2541.31, samples=59 00:23:39.689 lat (usec) : 1000=0.01% 00:23:39.689 lat (msec) : 2=0.09%, 4=94.78%, 10=5.09%, 20=0.02%, >=2000=0.01% 00:23:39.689 cpu : usr=5.49%, sys=12.03%, ctx=39371, majf=0, minf=13 00:23:39.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:39.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.689 issued rwts: total=613170,612626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.689 00:23:39.689 Run status group 0 (all jobs): 00:23:39.689 READ: bw=39.9MiB/s (41.9MB/s), 39.9MiB/s-39.9MiB/s (41.9MB/s-41.9MB/s), io=2395MiB (2512MB), run=60002-60002msec 00:23:39.689 WRITE: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=2393MiB (2509MB), run=60002-60002msec 00:23:39.689 00:23:39.689 Disk stats (read/write): 00:23:39.689 ublkb1: ios=610822/610192, merge=0/0, ticks=3498781/3979007, in_queue=7477789, util=99.95% 00:23:39.689 17:24:21 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.689 [2024-07-24 17:24:21.431505] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:39.689 [2024-07-24 17:24:21.468710] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:39.689 [2024-07-24 17:24:21.468984] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:39.689 [2024-07-24 17:24:21.476694] ublk.c: 
329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:39.689 [2024-07-24 17:24:21.476821] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:39.689 [2024-07-24 17:24:21.476843] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.689 17:24:21 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.689 [2024-07-24 17:24:21.491826] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:23:39.689 [2024-07-24 17:24:21.497718] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:23:39.689 [2024-07-24 17:24:21.497793] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.689 17:24:21 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:39.689 17:24:21 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:39.689 17:24:21 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77024 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77024 ']' 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77024 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77024 00:23:39.689 killing process with pid 77024 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77024' 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77024 00:23:39.689 17:24:21 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77024 00:23:39.689 [2024-07-24 17:24:22.486480] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:23:39.689 [2024-07-24 17:24:22.486561] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:23:39.689 00:23:39.689 real 1m5.463s 00:23:39.689 user 1m51.507s 00:23:39.689 sys 0m18.712s 00:23:39.689 17:24:23 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.689 17:24:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.689 ************************************ 00:23:39.689 END TEST ublk_recovery 00:23:39.689 ************************************ 00:23:39.689 17:24:23 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@264 -- # timing_exit lib 00:23:39.689 17:24:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.689 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:23:39.689 17:24:23 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- 
spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:23:39.689 17:24:23 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.689 17:24:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:39.689 17:24:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.689 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:23:39.689 ************************************ 00:23:39.689 START TEST ftl 00:23:39.689 ************************************ 00:23:39.689 17:24:23 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.689 * Looking for test storage... 00:23:39.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:39.689 17:24:23 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.689 17:24:23 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.689 17:24:23 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.689 17:24:23 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:39.689 17:24:23 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:39.689 17:24:23 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.689 17:24:23 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.689 17:24:23 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.689 17:24:23 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.689 17:24:23 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.689 17:24:23 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:39.689 17:24:23 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:39.689 17:24:23 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.689 17:24:23 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.689 17:24:23 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:39.689 17:24:23 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.689 17:24:23 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.689 17:24:23 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.689 17:24:23 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.689 17:24:23 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:39.689 17:24:23 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:39.689 17:24:23 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.689 17:24:23 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:39.689 17:24:23 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:39.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.690 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.690 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.690 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.690 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.690 17:24:24 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77812 00:23:39.690 17:24:24 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77812 00:23:39.690 17:24:24 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@831 -- # '[' -z 77812 ']' 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.690 17:24:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:39.690 [2024-07-24 17:24:24.588566] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
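Unlike the recovery test, ftl.sh starts the target with --wait-for-rpc, which holds off subsystem initialization so bdev options can be changed over RPC first. The deferred-init sequence that follows in the log, as a sketch (RPC names and the /dev/fd/62 process substitution are taken from the log; that -d is rpc.py's disable-auto-examine flag for bdev_set_options is an assumption):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$tgt" --wait-for-rpc &          # app starts; subsystems stay uninitialized
    spdk_tgt_pid=$!
    # ... wait for /var/tmp/spdk.sock as in the earlier sketch ...
    "$rpc" bdev_set_options -d       # assumed: disable bdev auto-examine
    "$rpc" framework_start_init      # now run subsystem initialization
    # Feed the local NVMe controllers in as generated JSON config; the <(...)
    # process substitution is why the log shows the file as /dev/fd/62.
    "$rpc" load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
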
00:23:39.690 [2024-07-24 17:24:24.588791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77812 ] 00:23:39.690 [2024-07-24 17:24:24.767604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.690 [2024-07-24 17:24:25.065532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.690 17:24:25 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.690 17:24:25 ftl -- common/autotest_common.sh@864 -- # return 0 00:23:39.690 17:24:25 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:39.690 17:24:25 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:40.625 17:24:26 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:40.625 17:24:26 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:41.192 17:24:27 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:41.192 17:24:27 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:41.192 17:24:27 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@50 -- # break 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:41.450 17:24:27 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:41.711 17:24:27 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:41.711 17:24:27 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:41.711 17:24:27 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:41.711 17:24:27 ftl -- ftl/ftl.sh@63 -- # break 00:23:41.711 17:24:27 ftl -- ftl/ftl.sh@66 -- # killprocess 77812 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@950 -- # '[' -z 77812 ']' 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@954 -- # kill -0 77812 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@955 -- # uname 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77812 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:41.711 killing process with pid 77812 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77812' 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@969 -- # kill 77812 00:23:41.711 17:24:27 ftl -- common/autotest_common.sh@974 -- # wait 77812 00:23:43.613 17:24:29 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:43.613 17:24:29 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:43.613 17:24:29 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:43.613 17:24:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.613 17:24:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:43.872 ************************************ 00:23:43.872 START TEST ftl_fio_basic 00:23:43.872 ************************************ 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:43.872 * Looking for test storage... 00:23:43.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77946 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77946 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 77946 ']' 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.872 17:24:29 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:43.873 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.873 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.873 17:24:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:43.873 [2024-07-24 17:24:30.083488] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
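fio.sh keys its workload lists by suite name in a bash associative array, exports FTL_BDEV_NAME and FTL_JSON_CONF for the fio job files, and iterates the chosen entry; with "basic" that resolves to the three randw-verify jobs above. A sketch of the selection logic (array contents verbatim from the log; the loop body is an illustrative stand-in for the fio invocation):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    tests=${suite[${1:-basic}]}
    [ -n "$tests" ] || { echo "unknown suite: ${1:-basic}" >&2; exit 1; }
    for t in $tests; do
        echo "would run fio job: $t"   # fio.sh maps each name to a job file here
    done
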
00:23:43.873 [2024-07-24 17:24:30.084377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77946 ] 00:23:44.132 [2024-07-24 17:24:30.259492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:44.390 [2024-07-24 17:24:30.472686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.390 [2024-07-24 17:24:30.472731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.390 [2024-07-24 17:24:30.472734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:45.323 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:45.581 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:45.581 { 00:23:45.581 "name": "nvme0n1", 00:23:45.581 "aliases": [ 00:23:45.581 "d714e027-d156-43c9-b438-97014e9d825c" 00:23:45.581 ], 00:23:45.581 "product_name": "NVMe disk", 00:23:45.581 "block_size": 4096, 00:23:45.581 "num_blocks": 1310720, 00:23:45.581 "uuid": "d714e027-d156-43c9-b438-97014e9d825c", 00:23:45.581 "assigned_rate_limits": { 00:23:45.581 "rw_ios_per_sec": 0, 00:23:45.581 "rw_mbytes_per_sec": 0, 00:23:45.581 "r_mbytes_per_sec": 0, 00:23:45.581 "w_mbytes_per_sec": 0 00:23:45.581 }, 00:23:45.581 "claimed": false, 00:23:45.581 "zoned": false, 00:23:45.581 "supported_io_types": { 00:23:45.581 "read": true, 00:23:45.581 "write": true, 00:23:45.581 "unmap": true, 00:23:45.581 "flush": true, 00:23:45.581 "reset": true, 00:23:45.581 "nvme_admin": true, 00:23:45.581 "nvme_io": true, 00:23:45.581 "nvme_io_md": false, 00:23:45.581 "write_zeroes": true, 00:23:45.581 "zcopy": false, 00:23:45.581 "get_zone_info": false, 00:23:45.581 "zone_management": false, 00:23:45.581 "zone_append": false, 00:23:45.581 "compare": true, 00:23:45.581 "compare_and_write": false, 00:23:45.581 "abort": true, 00:23:45.581 "seek_hole": false, 00:23:45.581 
"seek_data": false, 00:23:45.581 "copy": true, 00:23:45.581 "nvme_iov_md": false 00:23:45.581 }, 00:23:45.581 "driver_specific": { 00:23:45.581 "nvme": [ 00:23:45.581 { 00:23:45.581 "pci_address": "0000:00:11.0", 00:23:45.581 "trid": { 00:23:45.581 "trtype": "PCIe", 00:23:45.581 "traddr": "0000:00:11.0" 00:23:45.581 }, 00:23:45.581 "ctrlr_data": { 00:23:45.582 "cntlid": 0, 00:23:45.582 "vendor_id": "0x1b36", 00:23:45.582 "model_number": "QEMU NVMe Ctrl", 00:23:45.582 "serial_number": "12341", 00:23:45.582 "firmware_revision": "8.0.0", 00:23:45.582 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:45.582 "oacs": { 00:23:45.582 "security": 0, 00:23:45.582 "format": 1, 00:23:45.582 "firmware": 0, 00:23:45.582 "ns_manage": 1 00:23:45.582 }, 00:23:45.582 "multi_ctrlr": false, 00:23:45.582 "ana_reporting": false 00:23:45.582 }, 00:23:45.582 "vs": { 00:23:45.582 "nvme_version": "1.4" 00:23:45.582 }, 00:23:45.582 "ns_data": { 00:23:45.582 "id": 1, 00:23:45.582 "can_share": false 00:23:45.582 } 00:23:45.582 } 00:23:45.582 ], 00:23:45.582 "mp_policy": "active_passive" 00:23:45.582 } 00:23:45.582 } 00:23:45.582 ]' 00:23:45.582 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:45.859 17:24:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:46.131 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:46.131 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:46.390 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=19c10c44-9581-46ea-baaa-bb2d84d124a6 00:23:46.390 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 19c10c44-9581-46ea-baaa-bb2d84d124a6 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.648 17:24:32 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:23:46.648 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:23:46.649 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:46.907 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:46.907 { 00:23:46.907 "name": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:46.907 "aliases": [ 00:23:46.907 "lvs/nvme0n1p0" 00:23:46.907 ], 00:23:46.907 "product_name": "Logical Volume", 00:23:46.907 "block_size": 4096, 00:23:46.907 "num_blocks": 26476544, 00:23:46.907 "uuid": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:46.907 "assigned_rate_limits": { 00:23:46.907 "rw_ios_per_sec": 0, 00:23:46.907 "rw_mbytes_per_sec": 0, 00:23:46.907 "r_mbytes_per_sec": 0, 00:23:46.907 "w_mbytes_per_sec": 0 00:23:46.907 }, 00:23:46.907 "claimed": false, 00:23:46.907 "zoned": false, 00:23:46.907 "supported_io_types": { 00:23:46.907 "read": true, 00:23:46.907 "write": true, 00:23:46.907 "unmap": true, 00:23:46.907 "flush": false, 00:23:46.907 "reset": true, 00:23:46.907 "nvme_admin": false, 00:23:46.907 "nvme_io": false, 00:23:46.907 "nvme_io_md": false, 00:23:46.907 "write_zeroes": true, 00:23:46.907 "zcopy": false, 00:23:46.907 "get_zone_info": false, 00:23:46.907 "zone_management": false, 00:23:46.907 "zone_append": false, 00:23:46.907 "compare": false, 00:23:46.907 "compare_and_write": false, 00:23:46.907 "abort": false, 00:23:46.907 "seek_hole": true, 00:23:46.907 "seek_data": true, 00:23:46.907 "copy": false, 00:23:46.907 "nvme_iov_md": false 00:23:46.907 }, 00:23:46.907 "driver_specific": { 00:23:46.907 "lvol": { 00:23:46.907 "lvol_store_uuid": "19c10c44-9581-46ea-baaa-bb2d84d124a6", 00:23:46.907 "base_bdev": "nvme0n1", 00:23:46.907 "thin_provision": true, 00:23:46.907 "num_allocated_clusters": 0, 00:23:46.907 "snapshot": false, 00:23:46.907 "clone": false, 00:23:46.907 "esnap_clone": false 00:23:46.907 } 00:23:46.907 } 00:23:46.907 } 00:23:46.907 ]' 00:23:46.907 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:46.907 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:23:46.908 17:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:46.908 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:23:47.166 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:47.424 { 00:23:47.424 "name": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:47.424 "aliases": [ 00:23:47.424 "lvs/nvme0n1p0" 00:23:47.424 ], 00:23:47.424 "product_name": "Logical Volume", 00:23:47.424 "block_size": 4096, 00:23:47.424 "num_blocks": 26476544, 00:23:47.424 "uuid": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:47.424 "assigned_rate_limits": { 00:23:47.424 "rw_ios_per_sec": 0, 00:23:47.424 "rw_mbytes_per_sec": 0, 00:23:47.424 "r_mbytes_per_sec": 0, 00:23:47.424 "w_mbytes_per_sec": 0 00:23:47.424 }, 00:23:47.424 "claimed": false, 00:23:47.424 "zoned": false, 00:23:47.424 "supported_io_types": { 00:23:47.424 "read": true, 00:23:47.424 "write": true, 00:23:47.424 "unmap": true, 00:23:47.424 "flush": false, 00:23:47.424 "reset": true, 00:23:47.424 "nvme_admin": false, 00:23:47.424 "nvme_io": false, 00:23:47.424 "nvme_io_md": false, 00:23:47.424 "write_zeroes": true, 00:23:47.424 "zcopy": false, 00:23:47.424 "get_zone_info": false, 00:23:47.424 "zone_management": false, 00:23:47.424 "zone_append": false, 00:23:47.424 "compare": false, 00:23:47.424 "compare_and_write": false, 00:23:47.424 "abort": false, 00:23:47.424 "seek_hole": true, 00:23:47.424 "seek_data": true, 00:23:47.424 "copy": false, 00:23:47.424 "nvme_iov_md": false 00:23:47.424 }, 00:23:47.424 "driver_specific": { 00:23:47.424 "lvol": { 00:23:47.424 "lvol_store_uuid": "19c10c44-9581-46ea-baaa-bb2d84d124a6", 00:23:47.424 "base_bdev": "nvme0n1", 00:23:47.424 "thin_provision": true, 00:23:47.424 "num_allocated_clusters": 0, 00:23:47.424 "snapshot": false, 00:23:47.424 "clone": false, 00:23:47.424 "esnap_clone": false 00:23:47.424 } 00:23:47.424 } 00:23:47.424 } 00:23:47.424 ]' 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:47.424 17:24:33 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:47.681 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=9db7d679-dfd1-4cdc-b52a-e68a64f08747 
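Note the shell diagnostic captured above: fio.sh line 52 evaluated '[' -eq 1 ']', meaning the left operand of -eq expanded to an empty string and test was left with a single argument. The test returns nonzero, so execution simply continues at line 56; the run is unaffected, but it is the classic unquoted-empty-variable failure mode. Generic reproduction and the usual guards (the variable name below is hypothetical, not a claim about fio.sh):

    flag=""                               # hypothetical empty/unset variable
    # [ $flag -eq 1 ]                     # expands to: [ -eq 1 ]
    #                                     # -> "[: -eq: unary operator expected"
    [ "${flag:-0}" -eq 1 ] && echo on     # guard 1: default the expansion to 0
    [[ $flag -eq 1 ]] && echo on          # guard 2: inside [[ ]], -eq evaluates
                                          # its operands arithmetically; empty is 0
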
00:23:47.681 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:23:47.681 17:24:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9db7d679-dfd1-4cdc-b52a-e68a64f08747 00:23:47.939 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:47.939 { 00:23:47.939 "name": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:47.939 "aliases": [ 00:23:47.939 "lvs/nvme0n1p0" 00:23:47.939 ], 00:23:47.939 "product_name": "Logical Volume", 00:23:47.939 "block_size": 4096, 00:23:47.939 "num_blocks": 26476544, 00:23:47.939 "uuid": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:47.939 "assigned_rate_limits": { 00:23:47.939 "rw_ios_per_sec": 0, 00:23:47.939 "rw_mbytes_per_sec": 0, 00:23:47.939 "r_mbytes_per_sec": 0, 00:23:47.939 "w_mbytes_per_sec": 0 00:23:47.939 }, 00:23:47.939 "claimed": false, 00:23:47.939 "zoned": false, 00:23:47.939 "supported_io_types": { 00:23:47.939 "read": true, 00:23:47.939 "write": true, 00:23:47.939 "unmap": true, 00:23:47.939 "flush": false, 00:23:47.939 "reset": true, 00:23:47.939 "nvme_admin": false, 00:23:47.939 "nvme_io": false, 00:23:47.939 "nvme_io_md": false, 00:23:47.939 "write_zeroes": true, 00:23:47.939 "zcopy": false, 00:23:47.939 "get_zone_info": false, 00:23:47.939 "zone_management": false, 00:23:47.939 "zone_append": false, 00:23:47.939 "compare": false, 00:23:47.939 "compare_and_write": false, 00:23:47.939 "abort": false, 00:23:47.939 "seek_hole": true, 00:23:47.939 "seek_data": true, 00:23:47.939 "copy": false, 00:23:47.939 "nvme_iov_md": false 00:23:47.939 }, 00:23:47.939 "driver_specific": { 00:23:47.939 "lvol": { 00:23:47.939 "lvol_store_uuid": "19c10c44-9581-46ea-baaa-bb2d84d124a6", 00:23:47.939 "base_bdev": "nvme0n1", 00:23:47.939 "thin_provision": true, 00:23:47.939 "num_allocated_clusters": 0, 00:23:47.939 "snapshot": false, 00:23:47.939 "clone": false, 00:23:47.939 "esnap_clone": false 00:23:47.939 } 00:23:47.939 } 00:23:47.939 } 00:23:47.939 ]' 00:23:47.939 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:47.939 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:23:47.939 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:48.197 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:48.197 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:48.197 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:23:48.198 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:48.198 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:48.198 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9db7d679-dfd1-4cdc-b52a-e68a64f08747 -c nvc0n1p0 --l2p_dram_limit 60 00:23:48.457 [2024-07-24 17:24:34.459196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.459315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:48.457 [2024-07-24 17:24:34.459338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.457 [2024-07-24 17:24:34.459352] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.459437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.459457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.457 [2024-07-24 17:24:34.459470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:48.457 [2024-07-24 17:24:34.459483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.459517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:48.457 [2024-07-24 17:24:34.460591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:48.457 [2024-07-24 17:24:34.460631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.460678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.457 [2024-07-24 17:24:34.460693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:23:48.457 [2024-07-24 17:24:34.460707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.460847] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2bc0a5e1-bead-42ca-8330-723cb8021c2a 00:23:48.457 [2024-07-24 17:24:34.462748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.462786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:48.457 [2024-07-24 17:24:34.462805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:48.457 [2024-07-24 17:24:34.462817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.472539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.472584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.457 [2024-07-24 17:24:34.472623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.647 ms 00:23:48.457 [2024-07-24 17:24:34.472634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.473029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.457 [2024-07-24 17:24:34.473068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.457 [2024-07-24 17:24:34.473087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:48.457 [2024-07-24 17:24:34.473099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.457 [2024-07-24 17:24:34.473208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.458 [2024-07-24 17:24:34.473226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:48.458 [2024-07-24 17:24:34.473242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:48.458 [2024-07-24 17:24:34.473265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.458 [2024-07-24 17:24:34.473307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:48.458 [2024-07-24 17:24:34.478298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.458 [2024-07-24 17:24:34.478346] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.458 [2024-07-24 17:24:34.478377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:23:48.458 [2024-07-24 17:24:34.478391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.458 [2024-07-24 17:24:34.478440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.458 [2024-07-24 17:24:34.478457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:48.458 [2024-07-24 17:24:34.478469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:48.458 [2024-07-24 17:24:34.478482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.458 [2024-07-24 17:24:34.478530] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:48.458 [2024-07-24 17:24:34.478724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:48.458 [2024-07-24 17:24:34.478747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:48.458 [2024-07-24 17:24:34.478787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:48.458 [2024-07-24 17:24:34.478813] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:48.458 [2024-07-24 17:24:34.478831] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:48.458 [2024-07-24 17:24:34.478843] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:48.458 [2024-07-24 17:24:34.478855] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:48.458 [2024-07-24 17:24:34.478869] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:48.458 [2024-07-24 17:24:34.478882] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:48.458 [2024-07-24 17:24:34.478894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.458 [2024-07-24 17:24:34.478907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:48.458 [2024-07-24 17:24:34.478946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:23:48.458 [2024-07-24 17:24:34.478969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.458 [2024-07-24 17:24:34.479068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.458 [2024-07-24 17:24:34.479086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:48.458 [2024-07-24 17:24:34.479097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:48.458 [2024-07-24 17:24:34.479111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.458 [2024-07-24 17:24:34.479235] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:48.458 [2024-07-24 17:24:34.479272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:48.458 [2024-07-24 17:24:34.479284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:48.458 [2024-07-24 
17:24:34.479321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:48.458 [2024-07-24 17:24:34.479353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.458 [2024-07-24 17:24:34.479375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:48.458 [2024-07-24 17:24:34.479389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:48.458 [2024-07-24 17:24:34.479399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.458 [2024-07-24 17:24:34.479411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:48.458 [2024-07-24 17:24:34.479421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:48.458 [2024-07-24 17:24:34.479434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:48.458 [2024-07-24 17:24:34.479458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:48.458 [2024-07-24 17:24:34.479489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:48.458 [2024-07-24 17:24:34.479522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:48.458 [2024-07-24 17:24:34.479559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:48.458 [2024-07-24 17:24:34.479593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:48.458 [2024-07-24 17:24:34.479630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:48.458 [2024-07-24 17:24:34.479654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:48.458 [2024-07-24 17:24:34.479666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:48.458 [2024-07-24 17:24:34.479697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:23:48.458 [2024-07-24 17:24:34.479716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:48.458 [2024-07-24 17:24:34.479727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:48.458 [2024-07-24 17:24:34.479739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:48.458 [2024-07-24 17:24:34.479776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:48.458 [2024-07-24 17:24:34.479787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479799] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:48.458 [2024-07-24 17:24:34.479810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:48.458 [2024-07-24 17:24:34.479842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.458 [2024-07-24 17:24:34.479867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:48.458 [2024-07-24 17:24:34.479877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:48.458 [2024-07-24 17:24:34.479892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:48.458 [2024-07-24 17:24:34.479902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:48.458 [2024-07-24 17:24:34.479914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:48.458 [2024-07-24 17:24:34.479924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:48.458 [2024-07-24 17:24:34.479940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:48.458 [2024-07-24 17:24:34.479954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.458 [2024-07-24 17:24:34.479972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:48.458 [2024-07-24 17:24:34.479983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:48.458 [2024-07-24 17:24:34.479996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:48.458 [2024-07-24 17:24:34.480043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:48.459 [2024-07-24 17:24:34.480060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:48.459 [2024-07-24 17:24:34.480071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:48.459 [2024-07-24 17:24:34.480085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:48.459 [2024-07-24 17:24:34.480097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:48.459 [2024-07-24 
17:24:34.480110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:48.459 [2024-07-24 17:24:34.480126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:48.459 [2024-07-24 17:24:34.480197] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:48.459 [2024-07-24 17:24:34.480209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:48.459 [2024-07-24 17:24:34.480236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:48.459 [2024-07-24 17:24:34.480249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:48.459 [2024-07-24 17:24:34.480261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:48.459 [2024-07-24 17:24:34.480276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.459 [2024-07-24 17:24:34.480288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:48.459 [2024-07-24 17:24:34.480302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:23:48.459 [2024-07-24 17:24:34.480314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.459 [2024-07-24 17:24:34.480398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
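The superblock region table above is expressed in 4 KiB FTL blocks (the bdev later reports "block_size": 4096), so each hex blk_offs/blk_sz value can be cross-checked against the MiB figures in the layout dump. A quick bash sketch, assuming bc is available; blk_to_mib is a hypothetical helper for cross-checking, not an SPDK tool:

    # Convert an FTL block count to MiB, assuming the 4096-byte block size
    # reported for ftl0 (hypothetical helper, for sanity-checking the dump).
    blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }

    blk_to_mib 0x5000   # l2p region size   -> 80.00, matches "blocks: 80.00 MiB"
    blk_to_mib 0x800    # one p2l region    -> 8.00,  matches "blocks: 8.00 MiB"
    blk_to_mib 0x20     # l2p region offset -> .12,   matches "offset: 0.12 MiB"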
00:23:48.459 [2024-07-24 17:24:34.480415] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:52.646 [2024-07-24 17:24:38.139501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.646 [2024-07-24 17:24:38.139596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:52.646 [2024-07-24 17:24:38.139639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3659.109 ms 00:23:52.646 [2024-07-24 17:24:38.139653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.646 [2024-07-24 17:24:38.176537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.646 [2024-07-24 17:24:38.176598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.646 [2024-07-24 17:24:38.176638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.512 ms 00:23:52.646 [2024-07-24 17:24:38.176650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.646 [2024-07-24 17:24:38.176887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.646 [2024-07-24 17:24:38.176908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.646 [2024-07-24 17:24:38.176924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:52.646 [2024-07-24 17:24:38.176939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.646 [2024-07-24 17:24:38.228824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.646 [2024-07-24 17:24:38.228884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.646 [2024-07-24 17:24:38.228924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.784 ms 00:23:52.646 [2024-07-24 17:24:38.228937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.646 [2024-07-24 17:24:38.229046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.646 [2024-07-24 17:24:38.229067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.646 [2024-07-24 17:24:38.229082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:52.646 [2024-07-24 17:24:38.229094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.229769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.229793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.647 [2024-07-24 17:24:38.229809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:23:52.647 [2024-07-24 17:24:38.229821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.230021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.230046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.647 [2024-07-24 17:24:38.230063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:23:52.647 [2024-07-24 17:24:38.230074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.251640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.251718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.647 [2024-07-24 
17:24:38.251758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.523 ms 00:23:52.647 [2024-07-24 17:24:38.251771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.266248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:52.647 [2024-07-24 17:24:38.288568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.288700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:52.647 [2024-07-24 17:24:38.288726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.645 ms 00:23:52.647 [2024-07-24 17:24:38.288742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.353339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.353416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:52.647 [2024-07-24 17:24:38.353456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.525 ms 00:23:52.647 [2024-07-24 17:24:38.353471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.353813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.353844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:52.647 [2024-07-24 17:24:38.353860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:23:52.647 [2024-07-24 17:24:38.353878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.383319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.383403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:52.647 [2024-07-24 17:24:38.383425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.351 ms 00:23:52.647 [2024-07-24 17:24:38.383440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.412977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.413055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:52.647 [2024-07-24 17:24:38.413077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.470 ms 00:23:52.647 [2024-07-24 17:24:38.413091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.413942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.413974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:52.647 [2024-07-24 17:24:38.413989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:23:52.647 [2024-07-24 17:24:38.414004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.498824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.498928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:52.647 [2024-07-24 17:24:38.498950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.734 ms 00:23:52.647 [2024-07-24 17:24:38.498969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 
17:24:38.528938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.529016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:52.647 [2024-07-24 17:24:38.529037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.912 ms 00:23:52.647 [2024-07-24 17:24:38.529054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.558318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.558429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:52.647 [2024-07-24 17:24:38.558450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.206 ms 00:23:52.647 [2024-07-24 17:24:38.558464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.586946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.587031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:52.647 [2024-07-24 17:24:38.587052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.424 ms 00:23:52.647 [2024-07-24 17:24:38.587066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.587143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.587165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:52.647 [2024-07-24 17:24:38.587179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:52.647 [2024-07-24 17:24:38.587196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.587358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.647 [2024-07-24 17:24:38.587381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:52.647 [2024-07-24 17:24:38.587394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:52.647 [2024-07-24 17:24:38.587407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.647 [2024-07-24 17:24:38.589089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4129.278 ms, result 0 00:23:52.647 { 00:23:52.647 "name": "ftl0", 00:23:52.647 "uuid": "2bc0a5e1-bead-42ca-8330-723cb8021c2a" 00:23:52.647 } 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:52.647 17:24:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:52.906 [ 00:23:52.906 { 00:23:52.906 "name": "ftl0", 00:23:52.906 "aliases": [ 00:23:52.906 "2bc0a5e1-bead-42ca-8330-723cb8021c2a" 00:23:52.906 ], 00:23:52.906 "product_name": "FTL 
disk", 00:23:52.906 "block_size": 4096, 00:23:52.906 "num_blocks": 20971520, 00:23:52.906 "uuid": "2bc0a5e1-bead-42ca-8330-723cb8021c2a", 00:23:52.906 "assigned_rate_limits": { 00:23:52.906 "rw_ios_per_sec": 0, 00:23:52.906 "rw_mbytes_per_sec": 0, 00:23:52.906 "r_mbytes_per_sec": 0, 00:23:52.906 "w_mbytes_per_sec": 0 00:23:52.906 }, 00:23:52.906 "claimed": false, 00:23:52.906 "zoned": false, 00:23:52.906 "supported_io_types": { 00:23:52.906 "read": true, 00:23:52.906 "write": true, 00:23:52.906 "unmap": true, 00:23:52.906 "flush": true, 00:23:52.906 "reset": false, 00:23:52.906 "nvme_admin": false, 00:23:52.906 "nvme_io": false, 00:23:52.906 "nvme_io_md": false, 00:23:52.906 "write_zeroes": true, 00:23:52.906 "zcopy": false, 00:23:52.906 "get_zone_info": false, 00:23:52.906 "zone_management": false, 00:23:52.906 "zone_append": false, 00:23:52.906 "compare": false, 00:23:52.906 "compare_and_write": false, 00:23:52.906 "abort": false, 00:23:52.906 "seek_hole": false, 00:23:52.906 "seek_data": false, 00:23:52.906 "copy": false, 00:23:52.906 "nvme_iov_md": false 00:23:52.906 }, 00:23:52.906 "driver_specific": { 00:23:52.906 "ftl": { 00:23:52.906 "base_bdev": "9db7d679-dfd1-4cdc-b52a-e68a64f08747", 00:23:52.906 "cache": "nvc0n1p0" 00:23:52.906 } 00:23:52.906 } 00:23:52.906 } 00:23:52.906 ] 00:23:52.906 17:24:39 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:23:52.906 17:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:52.906 17:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:53.164 17:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:53.164 17:24:39 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:53.423 [2024-07-24 17:24:39.549633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.549740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.423 [2024-07-24 17:24:39.549783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:53.423 [2024-07-24 17:24:39.549795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.549847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.423 [2024-07-24 17:24:39.553556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.553593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.423 [2024-07-24 17:24:39.553627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:23:53.423 [2024-07-24 17:24:39.553640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.554194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.554242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.423 [2024-07-24 17:24:39.554260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:23:53.423 [2024-07-24 17:24:39.554289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.557427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.557461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.423 
[2024-07-24 17:24:39.557492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:23:53.423 [2024-07-24 17:24:39.557504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.563636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.563723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.423 [2024-07-24 17:24:39.563740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.098 ms 00:23:53.423 [2024-07-24 17:24:39.563757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.593575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.593729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.423 [2024-07-24 17:24:39.593753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.681 ms 00:23:53.423 [2024-07-24 17:24:39.593768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.612244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.612346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.423 [2024-07-24 17:24:39.612368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.371 ms 00:23:53.423 [2024-07-24 17:24:39.612382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.612791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.612835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.423 [2024-07-24 17:24:39.612851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:53.423 [2024-07-24 17:24:39.612865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.423 [2024-07-24 17:24:39.641084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.423 [2024-07-24 17:24:39.641193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:53.423 [2024-07-24 17:24:39.641215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.177 ms 00:23:53.423 [2024-07-24 17:24:39.641228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.683 [2024-07-24 17:24:39.669011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.683 [2024-07-24 17:24:39.669116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:53.683 [2024-07-24 17:24:39.669140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.698 ms 00:23:53.683 [2024-07-24 17:24:39.669154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.683 [2024-07-24 17:24:39.696609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.683 [2024-07-24 17:24:39.696796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.683 [2024-07-24 17:24:39.696820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.368 ms 00:23:53.683 [2024-07-24 17:24:39.696852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.683 [2024-07-24 17:24:39.724268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.683 [2024-07-24 17:24:39.724364] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.683 [2024-07-24 17:24:39.724385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.213 ms 00:23:53.683 [2024-07-24 17:24:39.724399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.683 [2024-07-24 17:24:39.724487] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.683 [2024-07-24 17:24:39.724516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 
[2024-07-24 17:24:39.724888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.724980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:53.683 [2024-07-24 17:24:39.725286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.683 [2024-07-24 17:24:39.725409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.725988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.684 [2024-07-24 17:24:39.726095] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.684 [2024-07-24 17:24:39.726107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bc0a5e1-bead-42ca-8330-723cb8021c2a 00:23:53.684 [2024-07-24 17:24:39.726124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:53.684 [2024-07-24 17:24:39.726135] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:53.684 [2024-07-24 17:24:39.726151] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:53.684 [2024-07-24 17:24:39.726162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:53.684 [2024-07-24 17:24:39.726176] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.684 [2024-07-24 17:24:39.726188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.684 [2024-07-24 17:24:39.726201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.684 [2024-07-24 17:24:39.726211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.684 [2024-07-24 17:24:39.726223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.684 [2024-07-24 17:24:39.726234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.684 [2024-07-24 17:24:39.726249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.684 [2024-07-24 17:24:39.726261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:23:53.684 [2024-07-24 17:24:39.726275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.743092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.684 [2024-07-24 17:24:39.743172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.684 [2024-07-24 17:24:39.743193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.714 ms 00:23:53.684 [2024-07-24 17:24:39.743238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.743791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.684 [2024-07-24 17:24:39.743817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.684 [2024-07-24 17:24:39.743832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:23:53.684 [2024-07-24 17:24:39.743850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.801403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.684 [2024-07-24 17:24:39.801519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.684 [2024-07-24 17:24:39.801541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.684 [2024-07-24 17:24:39.801555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
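Every management step in these startup, shutdown, and rollback traces is bracketed by the same four trace_step lines (Action, name, duration, status), which makes the slow steps easy to surface from a saved copy of the log. A throwaway sketch against exactly that format; ftl_trace.log is a stand-in for wherever the console output was captured, and this is ad-hoc text processing, not an SPDK utility:

    # Pair each 'name:' line with the 'duration:' line that follows it,
    # then list the costliest steps first (e.g. 'Scrub NV cache' at ~3659 ms).
    awk -F'duration: ' '/name: /    {n=$0; sub(/.*name: /, "", n)}
                        /duration: /{print $2 "\t" n}' ftl_trace.log |
        sort -rn | head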
00:23:53.684 [2024-07-24 17:24:39.801656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.684 [2024-07-24 17:24:39.801712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.684 [2024-07-24 17:24:39.801726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.684 [2024-07-24 17:24:39.801743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.801905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.684 [2024-07-24 17:24:39.801930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.684 [2024-07-24 17:24:39.801943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.684 [2024-07-24 17:24:39.801957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.801994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.684 [2024-07-24 17:24:39.802015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.684 [2024-07-24 17:24:39.802027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.684 [2024-07-24 17:24:39.802040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.684 [2024-07-24 17:24:39.903907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.684 [2024-07-24 17:24:39.903990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.684 [2024-07-24 17:24:39.904011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.684 [2024-07-24 17:24:39.904026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.990382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.990489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.943 [2024-07-24 17:24:39.990510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.990524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.990756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.990781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.943 [2024-07-24 17:24:39.990795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.990809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.990894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.990949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.943 [2024-07-24 17:24:39.990965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.990980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.991136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.991160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.943 [2024-07-24 17:24:39.991173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 
17:24:39.991187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.991266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.991288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.943 [2024-07-24 17:24:39.991300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.991313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.991369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.991391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.943 [2024-07-24 17:24:39.991402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.991415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.991481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.943 [2024-07-24 17:24:39.991503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.943 [2024-07-24 17:24:39.991515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.943 [2024-07-24 17:24:39.991528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.943 [2024-07-24 17:24:39.991774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.069 ms, result 0 00:23:53.943 true 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77946 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 77946 ']' 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 77946 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77946 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77946' 00:23:53.943 killing process with pid 77946 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 77946 00:23:53.943 17:24:40 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 77946 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.206 17:24:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:59.206 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:59.206 fio-3.35 00:23:59.206 Starting 1 thread 00:24:04.486 00:24:04.486 test: (groupid=0, jobs=1): err= 0: pid=78163: Wed Jul 24 17:24:50 2024 00:24:04.486 read: IOPS=882, BW=58.6MiB/s (61.5MB/s)(255MiB/4343msec) 00:24:04.487 slat (nsec): min=5352, max=47375, avg=7627.41, stdev=3745.26 00:24:04.487 clat (usec): min=333, max=2719, avg=503.74, stdev=65.86 00:24:04.487 lat (usec): min=339, max=2732, avg=511.37, stdev=66.48 00:24:04.487 clat percentiles (usec): 00:24:04.487 | 1.00th=[ 392], 5.00th=[ 429], 10.00th=[ 449], 20.00th=[ 461], 00:24:04.487 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 502], 00:24:04.487 | 70.00th=[ 523], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 611], 00:24:04.487 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[ 783], 99.95th=[ 840], 00:24:04.487 | 99.99th=[ 2704] 00:24:04.487 write: IOPS=888, BW=59.0MiB/s (61.9MB/s)(256MiB/4339msec); 0 zone resets 00:24:04.487 slat (usec): min=18, max=111, avg=24.41, stdev= 7.23 00:24:04.487 clat (usec): min=389, max=997, avg=579.67, stdev=66.85 00:24:04.487 lat (usec): min=420, max=1027, avg=604.08, stdev=67.45 00:24:04.487 clat percentiles (usec): 00:24:04.487 | 1.00th=[ 461], 5.00th=[ 490], 10.00th=[ 506], 20.00th=[ 537], 00:24:04.487 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 586], 00:24:04.487 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 660], 95.00th=[ 693], 00:24:04.487 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 979], 99.95th=[ 996], 00:24:04.487 | 99.99th=[ 996] 00:24:04.487 bw ( KiB/s): min=59160, max=61744, per=100.00%, avg=60435.00, stdev=893.11, samples=8 00:24:04.487 iops : min= 870, max= 908, avg=888.75, stdev=13.13, samples=8 00:24:04.487 lat (usec) : 500=32.19%, 750=66.68%, 1000=1.12% 00:24:04.487 lat (msec) : 
4=0.01% 00:24:04.487 cpu : usr=99.03%, sys=0.21%, ctx=7, majf=0, minf=1171 00:24:04.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:04.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.487 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:04.487 00:24:04.487 Run status group 0 (all jobs): 00:24:04.487 READ: bw=58.6MiB/s (61.5MB/s), 58.6MiB/s-58.6MiB/s (61.5MB/s-61.5MB/s), io=255MiB (267MB), run=4343-4343msec 00:24:04.487 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=256MiB (269MB), run=4339-4339msec 00:24:05.863 ----------------------------------------------------- 00:24:05.863 Suppressions used: 00:24:05.863 count bytes template 00:24:05.863 1 5 /usr/src/fio/parse.c 00:24:05.863 1 8 libtcmalloc_minimal.so 00:24:05.863 1 904 libcrypto.so 00:24:05.863 ----------------------------------------------------- 00:24:05.863 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:06.122 17:24:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:06.380 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:06.380 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:06.380 fio-3.35 00:24:06.380 Starting 2 threads 00:24:45.086 00:24:45.086 first_half: (groupid=0, jobs=1): err= 0: pid=78270: Wed Jul 24 17:25:24 2024 00:24:45.086 read: IOPS=2094, BW=8378KiB/s (8579kB/s)(255MiB/31143msec) 00:24:45.086 slat (nsec): min=4522, max=90879, avg=7596.09, stdev=2737.23 00:24:45.086 clat (usec): min=783, max=312187, avg=41459.14, stdev=18215.75 00:24:45.086 lat (usec): min=793, max=312195, avg=41466.74, stdev=18215.92 00:24:45.086 clat percentiles (msec): 00:24:45.086 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:24:45.086 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:24:45.087 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 48], 00:24:45.087 | 99.00th=[ 142], 99.50th=[ 188], 99.90th=[ 243], 99.95th=[ 279], 00:24:45.087 | 99.99th=[ 296] 00:24:45.087 write: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(256MiB/25117msec); 0 zone resets 00:24:45.087 slat (usec): min=5, max=237, avg=10.32, stdev= 6.05 00:24:45.087 clat (usec): min=440, max=138295, avg=19533.85, stdev=32655.49 00:24:45.087 lat (usec): min=454, max=138306, avg=19544.16, stdev=32655.74 00:24:45.087 clat percentiles (usec): 00:24:45.087 | 1.00th=[ 955], 5.00th=[ 1205], 10.00th=[ 1369], 20.00th=[ 1614], 00:24:45.087 | 30.00th=[ 1876], 40.00th=[ 2245], 50.00th=[ 4178], 60.00th=[ 6587], 00:24:45.087 | 70.00th=[ 12256], 80.00th=[ 16712], 90.00th=[ 88605], 95.00th=[ 98042], 00:24:45.087 | 99.00th=[107480], 99.50th=[113771], 99.90th=[126354], 99.95th=[133694], 00:24:45.087 | 99.99th=[137364] 00:24:45.087 bw ( KiB/s): min= 24, max=32248, per=76.11%, avg=15887.52, stdev=9167.86, samples=33 00:24:45.087 iops : min= 6, max= 8062, avg=3971.88, stdev=2291.97, samples=33 00:24:45.087 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.72% 00:24:45.087 lat (msec) : 2=16.60%, 4=7.77%, 10=9.68%, 20=8.04%, 50=46.84% 00:24:45.087 lat (msec) : 100=7.67%, 250=2.62%, 500=0.04% 00:24:45.087 cpu : usr=98.83%, sys=0.39%, ctx=84, majf=0, minf=5536 00:24:45.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.087 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.087 issued rwts: total=65231,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.087 second_half: (groupid=0, jobs=1): err= 0: pid=78271: Wed Jul 24 17:25:24 2024 00:24:45.087 read: IOPS=2095, BW=8382KiB/s (8583kB/s)(254MiB/31089msec) 00:24:45.087 slat (nsec): min=4422, max=66882, avg=7616.54, stdev=2697.93 00:24:45.087 clat (usec): min=783, max=234169, avg=41517.05, stdev=15473.36 00:24:45.087 lat (usec): min=792, max=234177, avg=41524.67, stdev=15473.45 00:24:45.087 clat percentiles (msec): 00:24:45.087 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:24:45.087 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:24:45.087 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 50], 
00:24:45.087 | 99.00th=[ 128], 99.50th=[ 161], 99.90th=[ 194], 99.95th=[ 218], 00:24:45.087 | 99.99th=[ 232] 00:24:45.087 write: IOPS=3256, BW=12.7MiB/s (13.3MB/s)(256MiB/20126msec); 0 zone resets 00:24:45.087 slat (usec): min=5, max=232, avg=10.43, stdev= 6.56 00:24:45.087 clat (usec): min=496, max=159093, avg=19422.07, stdev=32635.44 00:24:45.087 lat (usec): min=511, max=159101, avg=19432.50, stdev=32635.78 00:24:45.087 clat percentiles (usec): 00:24:45.087 | 1.00th=[ 988], 5.00th=[ 1237], 10.00th=[ 1401], 20.00th=[ 1631], 00:24:45.087 | 30.00th=[ 1860], 40.00th=[ 2147], 50.00th=[ 3458], 60.00th=[ 6652], 00:24:45.087 | 70.00th=[ 12780], 80.00th=[ 16319], 90.00th=[ 88605], 95.00th=[ 98042], 00:24:45.087 | 99.00th=[107480], 99.50th=[115868], 99.90th=[135267], 99.95th=[149947], 00:24:45.087 | 99.99th=[156238] 00:24:45.087 bw ( KiB/s): min= 848, max=34952, per=86.61%, avg=18078.90, stdev=8647.38, samples=29 00:24:45.087 iops : min= 212, max= 8738, avg=4519.72, stdev=2161.85, samples=29 00:24:45.087 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.53% 00:24:45.087 lat (msec) : 2=17.30%, 4=8.48%, 10=7.26%, 20=9.24%, 50=46.73% 00:24:45.087 lat (msec) : 100=7.88%, 250=2.55% 00:24:45.087 cpu : usr=98.95%, sys=0.37%, ctx=59, majf=0, minf=5567 00:24:45.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.087 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.087 issued rwts: total=65147,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.087 00:24:45.087 Run status group 0 (all jobs): 00:24:45.087 READ: bw=16.4MiB/s (17.1MB/s), 8378KiB/s-8382KiB/s (8579kB/s-8583kB/s), io=509MiB (534MB), run=31089-31143msec 00:24:45.087 WRITE: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-12.7MiB/s (10.7MB/s-13.3MB/s), io=512MiB (537MB), run=20126-25117msec 00:24:45.087 ----------------------------------------------------- 00:24:45.087 Suppressions used: 00:24:45.087 count bytes template 00:24:45.087 2 10 /usr/src/fio/parse.c 00:24:45.087 1 96 /usr/src/fio/iolog.c 00:24:45.087 1 8 libtcmalloc_minimal.so 00:24:45.087 1 904 libcrypto.so 00:24:45.087 ----------------------------------------------------- 00:24:45.087 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:45.087 
17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:45.087 17:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:45.087 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:45.087 fio-3.35 00:24:45.087 Starting 1 thread 00:24:59.962 00:24:59.962 test: (groupid=0, jobs=1): err= 0: pid=78647: Wed Jul 24 17:25:45 2024 00:24:59.962 read: IOPS=5930, BW=23.2MiB/s (24.3MB/s)(255MiB/10995msec) 00:24:59.962 slat (nsec): min=4443, max=76624, avg=8174.15, stdev=3530.03 00:24:59.962 clat (usec): min=843, max=457601, avg=21569.98, stdev=16682.49 00:24:59.962 lat (usec): min=865, max=457611, avg=21578.15, stdev=16682.48 00:24:59.962 clat percentiles (msec): 00:24:59.962 | 1.00th=[ 19], 5.00th=[ 20], 10.00th=[ 20], 20.00th=[ 21], 00:24:59.962 | 30.00th=[ 21], 40.00th=[ 21], 50.00th=[ 21], 60.00th=[ 21], 00:24:59.962 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 22], 95.00th=[ 22], 00:24:59.962 | 99.00th=[ 27], 99.50th=[ 30], 99.90th=[ 372], 99.95th=[ 380], 00:24:59.962 | 99.99th=[ 380] 00:24:59.962 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(256MiB/5693msec); 0 zone resets 00:24:59.962 slat (usec): min=5, max=373, avg=10.85, stdev= 7.34 00:24:59.962 clat (usec): min=676, max=74371, avg=11057.77, stdev=13641.49 00:24:59.962 lat (usec): min=685, max=74382, avg=11068.61, stdev=13641.55 00:24:59.962 clat percentiles (usec): 00:24:59.962 | 1.00th=[ 955], 5.00th=[ 1172], 10.00th=[ 1287], 20.00th=[ 1467], 00:24:59.962 | 30.00th=[ 1680], 40.00th=[ 2147], 50.00th=[ 7570], 60.00th=[ 8717], 00:24:59.962 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[39584], 95.00th=[41681], 00:24:59.962 | 99.00th=[47449], 99.50th=[49021], 99.90th=[52167], 99.95th=[63177], 00:24:59.962 | 99.99th=[69731] 00:24:59.962 bw ( KiB/s): min=14296, max=62040, per=94.86%, avg=43682.92, stdev=12113.27, samples=12 00:24:59.962 iops : min= 3574, max=15510, avg=10920.67, stdev=3028.30, samples=12 00:24:59.962 lat (usec) : 750=0.01%, 1000=0.77% 00:24:59.962 lat (msec) : 2=18.59%, 4=1.55%, 10=15.04%, 20=17.39%, 50=46.31% 00:24:59.962 lat (msec) : 100=0.15%, 250=0.10%, 500=0.10% 00:24:59.962 cpu : usr=98.20%, sys=0.76%, ctx=30, majf=0, 
minf=5567 00:24:59.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:59.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.962 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:59.962 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:59.962 00:24:59.962 Run status group 0 (all jobs): 00:24:59.962 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=255MiB (267MB), run=10995-10995msec 00:24:59.962 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=256MiB (268MB), run=5693-5693msec 00:25:00.531 ----------------------------------------------------- 00:25:00.531 Suppressions used: 00:25:00.531 count bytes template 00:25:00.531 1 5 /usr/src/fio/parse.c 00:25:00.531 2 192 /usr/src/fio/iolog.c 00:25:00.531 1 8 libtcmalloc_minimal.so 00:25:00.531 1 904 libcrypto.so 00:25:00.531 ----------------------------------------------------- 00:25:00.531 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:00.531 Remove shared memory files 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62000 /dev/shm/spdk_tgt_trace.pid76874 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:25:00.531 ************************************ 00:25:00.531 END TEST ftl_fio_basic 00:25:00.531 ************************************ 00:25:00.531 00:25:00.531 real 1m16.757s 00:25:00.531 user 2m46.925s 00:25:00.531 sys 0m4.374s 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.531 17:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:00.531 17:25:46 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:25:00.531 17:25:46 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:00.531 17:25:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.531 17:25:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:00.531 ************************************ 00:25:00.531 START TEST ftl_bdevperf 00:25:00.531 ************************************ 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:25:00.531 * Looking for test storage... 
00:25:00.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:25:00.531 17:25:46 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78907 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 78907 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 78907 ']' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.531 17:25:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.790 [2024-07-24 17:25:46.869940] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:00.790 [2024-07-24 17:25:46.870119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78907 ] 00:25:01.048 [2024-07-24 17:25:47.041165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.048 [2024-07-24 17:25:47.239136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:25:01.615 17:25:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:02.182 17:25:48 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:25:02.182 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:02.440 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:02.440 { 00:25:02.440 "name": "nvme0n1", 00:25:02.440 "aliases": [ 00:25:02.440 "5b5743a3-fd2e-4e20-94f0-591ac4b1e043" 00:25:02.440 ], 00:25:02.440 "product_name": "NVMe disk", 00:25:02.440 "block_size": 4096, 00:25:02.440 "num_blocks": 1310720, 00:25:02.441 "uuid": "5b5743a3-fd2e-4e20-94f0-591ac4b1e043", 00:25:02.441 "assigned_rate_limits": { 00:25:02.441 "rw_ios_per_sec": 0, 00:25:02.441 "rw_mbytes_per_sec": 0, 00:25:02.441 "r_mbytes_per_sec": 0, 00:25:02.441 "w_mbytes_per_sec": 0 00:25:02.441 }, 00:25:02.441 "claimed": true, 00:25:02.441 "claim_type": "read_many_write_one", 00:25:02.441 "zoned": false, 00:25:02.441 "supported_io_types": { 00:25:02.441 "read": true, 00:25:02.441 "write": true, 00:25:02.441 "unmap": true, 00:25:02.441 "flush": true, 00:25:02.441 "reset": true, 00:25:02.441 "nvme_admin": true, 00:25:02.441 "nvme_io": true, 00:25:02.441 "nvme_io_md": false, 00:25:02.441 "write_zeroes": true, 00:25:02.441 "zcopy": false, 00:25:02.441 "get_zone_info": false, 00:25:02.441 "zone_management": false, 00:25:02.441 "zone_append": false, 00:25:02.441 "compare": true, 00:25:02.441 "compare_and_write": false, 00:25:02.441 "abort": true, 00:25:02.441 "seek_hole": false, 00:25:02.441 "seek_data": false, 00:25:02.441 "copy": true, 00:25:02.441 "nvme_iov_md": false 00:25:02.441 }, 00:25:02.441 "driver_specific": { 00:25:02.441 "nvme": [ 00:25:02.441 { 00:25:02.441 "pci_address": "0000:00:11.0", 00:25:02.441 "trid": { 00:25:02.441 "trtype": "PCIe", 00:25:02.441 "traddr": "0000:00:11.0" 00:25:02.441 }, 00:25:02.441 "ctrlr_data": { 00:25:02.441 "cntlid": 0, 00:25:02.441 "vendor_id": "0x1b36", 00:25:02.441 "model_number": "QEMU NVMe Ctrl", 00:25:02.441 "serial_number": "12341", 00:25:02.441 "firmware_revision": "8.0.0", 00:25:02.441 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:02.441 "oacs": { 00:25:02.441 "security": 0, 00:25:02.441 "format": 1, 00:25:02.441 "firmware": 0, 00:25:02.441 "ns_manage": 1 00:25:02.441 }, 00:25:02.441 "multi_ctrlr": false, 00:25:02.441 "ana_reporting": false 00:25:02.441 }, 00:25:02.441 "vs": { 00:25:02.441 "nvme_version": "1.4" 00:25:02.441 }, 00:25:02.441 "ns_data": { 00:25:02.441 "id": 1, 00:25:02.441 "can_share": false 00:25:02.441 } 00:25:02.441 } 00:25:02.441 ], 00:25:02.441 "mp_policy": "active_passive" 00:25:02.441 } 00:25:02.441 } 00:25:02.441 ]' 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:25:02.441 17:25:48 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:02.441 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:02.699 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=19c10c44-9581-46ea-baaa-bb2d84d124a6 00:25:02.699 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:25:02.699 17:25:48 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19c10c44-9581-46ea-baaa-bb2d84d124a6 00:25:02.958 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:03.217 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd 00:25:03.217 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:25:03.476 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:03.735 { 00:25:03.735 "name": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:03.735 "aliases": [ 00:25:03.735 "lvs/nvme0n1p0" 00:25:03.735 ], 00:25:03.735 "product_name": "Logical Volume", 00:25:03.735 "block_size": 4096, 00:25:03.735 "num_blocks": 26476544, 00:25:03.735 "uuid": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:03.735 "assigned_rate_limits": { 00:25:03.735 "rw_ios_per_sec": 0, 00:25:03.735 "rw_mbytes_per_sec": 0, 00:25:03.735 "r_mbytes_per_sec": 0, 00:25:03.735 "w_mbytes_per_sec": 0 00:25:03.735 }, 00:25:03.735 "claimed": false, 00:25:03.735 "zoned": false, 00:25:03.735 "supported_io_types": { 00:25:03.735 "read": true, 00:25:03.735 "write": true, 00:25:03.735 "unmap": true, 00:25:03.735 "flush": false, 00:25:03.735 "reset": true, 00:25:03.735 "nvme_admin": false, 00:25:03.735 "nvme_io": false, 00:25:03.735 "nvme_io_md": false, 00:25:03.735 "write_zeroes": true, 00:25:03.735 "zcopy": false, 00:25:03.735 "get_zone_info": false, 00:25:03.735 "zone_management": false, 00:25:03.735 "zone_append": false, 00:25:03.735 "compare": false, 00:25:03.735 "compare_and_write": false, 00:25:03.735 "abort": false, 00:25:03.735 "seek_hole": true, 
00:25:03.735 "seek_data": true, 00:25:03.735 "copy": false, 00:25:03.735 "nvme_iov_md": false 00:25:03.735 }, 00:25:03.735 "driver_specific": { 00:25:03.735 "lvol": { 00:25:03.735 "lvol_store_uuid": "1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd", 00:25:03.735 "base_bdev": "nvme0n1", 00:25:03.735 "thin_provision": true, 00:25:03.735 "num_allocated_clusters": 0, 00:25:03.735 "snapshot": false, 00:25:03.735 "clone": false, 00:25:03.735 "esnap_clone": false 00:25:03.735 } 00:25:03.735 } 00:25:03.735 } 00:25:03.735 ]' 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:25:03.735 17:25:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:25:03.994 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:04.253 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:04.253 { 00:25:04.253 "name": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:04.253 "aliases": [ 00:25:04.253 "lvs/nvme0n1p0" 00:25:04.253 ], 00:25:04.253 "product_name": "Logical Volume", 00:25:04.253 "block_size": 4096, 00:25:04.253 "num_blocks": 26476544, 00:25:04.253 "uuid": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:04.253 "assigned_rate_limits": { 00:25:04.253 "rw_ios_per_sec": 0, 00:25:04.253 "rw_mbytes_per_sec": 0, 00:25:04.253 "r_mbytes_per_sec": 0, 00:25:04.253 "w_mbytes_per_sec": 0 00:25:04.253 }, 00:25:04.253 "claimed": false, 00:25:04.253 "zoned": false, 00:25:04.253 "supported_io_types": { 00:25:04.253 "read": true, 00:25:04.253 "write": true, 00:25:04.253 "unmap": true, 00:25:04.253 "flush": false, 00:25:04.253 "reset": true, 00:25:04.253 "nvme_admin": false, 00:25:04.253 "nvme_io": false, 00:25:04.253 "nvme_io_md": false, 00:25:04.253 "write_zeroes": true, 00:25:04.253 "zcopy": false, 00:25:04.253 "get_zone_info": false, 00:25:04.253 "zone_management": false, 00:25:04.253 "zone_append": false, 00:25:04.253 "compare": false, 00:25:04.253 "compare_and_write": false, 00:25:04.253 "abort": false, 00:25:04.253 "seek_hole": true, 00:25:04.253 "seek_data": true, 00:25:04.253 
"copy": false, 00:25:04.253 "nvme_iov_md": false 00:25:04.253 }, 00:25:04.253 "driver_specific": { 00:25:04.253 "lvol": { 00:25:04.253 "lvol_store_uuid": "1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd", 00:25:04.253 "base_bdev": "nvme0n1", 00:25:04.253 "thin_provision": true, 00:25:04.253 "num_allocated_clusters": 0, 00:25:04.253 "snapshot": false, 00:25:04.253 "clone": false, 00:25:04.253 "esnap_clone": false 00:25:04.253 } 00:25:04.253 } 00:25:04.253 } 00:25:04.253 ]' 00:25:04.253 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:04.253 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:25:04.253 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:04.512 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:04.512 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:04.512 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:25:04.512 17:25:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:25:04.512 17:25:50 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2313c66-c969-4bdd-a4a4-46b176cb3161 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:04.771 { 00:25:04.771 "name": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:04.771 "aliases": [ 00:25:04.771 "lvs/nvme0n1p0" 00:25:04.771 ], 00:25:04.771 "product_name": "Logical Volume", 00:25:04.771 "block_size": 4096, 00:25:04.771 "num_blocks": 26476544, 00:25:04.771 "uuid": "d2313c66-c969-4bdd-a4a4-46b176cb3161", 00:25:04.771 "assigned_rate_limits": { 00:25:04.771 "rw_ios_per_sec": 0, 00:25:04.771 "rw_mbytes_per_sec": 0, 00:25:04.771 "r_mbytes_per_sec": 0, 00:25:04.771 "w_mbytes_per_sec": 0 00:25:04.771 }, 00:25:04.771 "claimed": false, 00:25:04.771 "zoned": false, 00:25:04.771 "supported_io_types": { 00:25:04.771 "read": true, 00:25:04.771 "write": true, 00:25:04.771 "unmap": true, 00:25:04.771 "flush": false, 00:25:04.771 "reset": true, 00:25:04.771 "nvme_admin": false, 00:25:04.771 "nvme_io": false, 00:25:04.771 "nvme_io_md": false, 00:25:04.771 "write_zeroes": true, 00:25:04.771 "zcopy": false, 00:25:04.771 "get_zone_info": false, 00:25:04.771 "zone_management": false, 00:25:04.771 "zone_append": false, 00:25:04.771 "compare": false, 00:25:04.771 "compare_and_write": false, 00:25:04.771 "abort": false, 00:25:04.771 "seek_hole": true, 00:25:04.771 "seek_data": true, 00:25:04.771 "copy": false, 00:25:04.771 "nvme_iov_md": false 00:25:04.771 }, 00:25:04.771 "driver_specific": { 00:25:04.771 "lvol": { 00:25:04.771 "lvol_store_uuid": "1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd", 00:25:04.771 "base_bdev": 
"nvme0n1", 00:25:04.771 "thin_provision": true, 00:25:04.771 "num_allocated_clusters": 0, 00:25:04.771 "snapshot": false, 00:25:04.771 "clone": false, 00:25:04.771 "esnap_clone": false 00:25:04.771 } 00:25:04.771 } 00:25:04.771 } 00:25:04.771 ]' 00:25:04.771 17:25:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:04.771 17:25:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:25:04.771 17:25:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:05.031 17:25:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:05.031 17:25:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:05.031 17:25:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:25:05.031 17:25:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:25:05.031 17:25:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d2313c66-c969-4bdd-a4a4-46b176cb3161 -c nvc0n1p0 --l2p_dram_limit 20 00:25:05.031 [2024-07-24 17:25:51.242368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.242439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:05.031 [2024-07-24 17:25:51.242463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:05.031 [2024-07-24 17:25:51.242474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.242548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.242564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:05.031 [2024-07-24 17:25:51.242581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:05.031 [2024-07-24 17:25:51.242591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.242618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:05.031 [2024-07-24 17:25:51.243768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:05.031 [2024-07-24 17:25:51.243823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.243836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:05.031 [2024-07-24 17:25:51.243851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:25:05.031 [2024-07-24 17:25:51.243862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.243992] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 863803a2-da36-4822-9653-169b59fb4a39 00:25:05.031 [2024-07-24 17:25:51.245963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.246031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:05.031 [2024-07-24 17:25:51.246065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:05.031 [2024-07-24 17:25:51.246078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.255904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.255983] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:05.031 [2024-07-24 17:25:51.256000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.737 ms 00:25:05.031 [2024-07-24 17:25:51.256013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.256130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.256153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:05.031 [2024-07-24 17:25:51.256165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:05.031 [2024-07-24 17:25:51.256180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.256283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.256303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:05.031 [2024-07-24 17:25:51.256321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:05.031 [2024-07-24 17:25:51.256333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.256362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:05.031 [2024-07-24 17:25:51.261203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.261255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:05.031 [2024-07-24 17:25:51.261290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.846 ms 00:25:05.031 [2024-07-24 17:25:51.261301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.031 [2024-07-24 17:25:51.261349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.031 [2024-07-24 17:25:51.261363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:05.032 [2024-07-24 17:25:51.261377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:05.032 [2024-07-24 17:25:51.261387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.032 [2024-07-24 17:25:51.261429] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:05.032 [2024-07-24 17:25:51.261610] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:05.032 [2024-07-24 17:25:51.261634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:05.032 [2024-07-24 17:25:51.261648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:05.032 [2024-07-24 17:25:51.261665] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:05.032 [2024-07-24 17:25:51.261677] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:05.032 [2024-07-24 17:25:51.261709] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:05.032 [2024-07-24 17:25:51.261723] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:05.032 [2024-07-24 17:25:51.261739] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:05.032 [2024-07-24 17:25:51.261750] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:25:05.032 [2024-07-24 17:25:51.261764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.032 [2024-07-24 17:25:51.261775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:05.032 [2024-07-24 17:25:51.261792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:25:05.032 [2024-07-24 17:25:51.261804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.032 [2024-07-24 17:25:51.261905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.032 [2024-07-24 17:25:51.261918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:05.032 [2024-07-24 17:25:51.261932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:05.032 [2024-07-24 17:25:51.261942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.032 [2024-07-24 17:25:51.262037] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:05.032 [2024-07-24 17:25:51.262052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:05.032 [2024-07-24 17:25:51.262066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:05.032 [2024-07-24 17:25:51.262119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:05.032 [2024-07-24 17:25:51.262153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:05.032 [2024-07-24 17:25:51.262175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:05.032 [2024-07-24 17:25:51.262185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:05.032 [2024-07-24 17:25:51.262196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:05.032 [2024-07-24 17:25:51.262206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:05.032 [2024-07-24 17:25:51.262220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:05.032 [2024-07-24 17:25:51.262230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:05.032 [2024-07-24 17:25:51.262255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:05.032 [2024-07-24 17:25:51.262319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:05.032 [2024-07-24 17:25:51.262357] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:05.032 [2024-07-24 17:25:51.262391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:05.032 [2024-07-24 17:25:51.262424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:05.032 [2024-07-24 17:25:51.262461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:05.032 [2024-07-24 17:25:51.262498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:05.032 [2024-07-24 17:25:51.262507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:05.032 [2024-07-24 17:25:51.262535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:05.032 [2024-07-24 17:25:51.262546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:05.032 [2024-07-24 17:25:51.262576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:05.032 [2024-07-24 17:25:51.262586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:05.032 [2024-07-24 17:25:51.262609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:05.032 [2024-07-24 17:25:51.262622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262648] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:05.032 [2024-07-24 17:25:51.262662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:05.032 [2024-07-24 17:25:51.262674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.032 [2024-07-24 17:25:51.262700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:05.032 [2024-07-24 17:25:51.262731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:05.032 [2024-07-24 17:25:51.262749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:05.032 [2024-07-24 17:25:51.262778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:05.032 [2024-07-24 17:25:51.262789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:05.032 [2024-07-24 17:25:51.262802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:05.032 [2024-07-24 17:25:51.262818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:05.032 [2024-07-24 17:25:51.262835] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.262848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:05.032 [2024-07-24 17:25:51.262861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:05.032 [2024-07-24 17:25:51.262887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:05.032 [2024-07-24 17:25:51.262901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:05.032 [2024-07-24 17:25:51.262913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:05.032 [2024-07-24 17:25:51.262955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:05.032 [2024-07-24 17:25:51.262971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:05.032 [2024-07-24 17:25:51.262985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:05.032 [2024-07-24 17:25:51.262997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:05.032 [2024-07-24 17:25:51.263015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:05.032 [2024-07-24 17:25:51.263080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:05.032 [2024-07-24 17:25:51.263095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:05.032 [2024-07-24 17:25:51.263122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:05.032 [2024-07-24 17:25:51.263133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:05.032 [2024-07-24 17:25:51.263148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:05.032 [2024-07-24 17:25:51.263161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.032 [2024-07-24 17:25:51.263179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:05.033 [2024-07-24 17:25:51.263191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:25:05.033 [2024-07-24 17:25:51.263205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.033 [2024-07-24 17:25:51.263268] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:05.033 [2024-07-24 17:25:51.263290] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:08.318 [2024-07-24 17:25:54.045924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.046013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:08.318 [2024-07-24 17:25:54.046053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2782.669 ms 00:25:08.318 [2024-07-24 17:25:54.046067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.089751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.089844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.318 [2024-07-24 17:25:54.089865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.383 ms 00:25:08.318 [2024-07-24 17:25:54.089879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.090067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.090090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.318 [2024-07-24 17:25:54.090104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:08.318 [2024-07-24 17:25:54.090150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.126577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.126684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.318 [2024-07-24 17:25:54.126708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.376 ms 00:25:08.318 [2024-07-24 17:25:54.126722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.126768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.126786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.318 [2024-07-24 17:25:54.126798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:08.318 [2024-07-24 17:25:54.126810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.127488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.127510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.318 [2024-07-24 17:25:54.127524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:25:08.318 [2024-07-24 17:25:54.127537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.127709] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.127732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.318 [2024-07-24 17:25:54.127748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:25:08.318 [2024-07-24 17:25:54.127764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.143870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.143942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.318 [2024-07-24 17:25:54.143959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.084 ms 00:25:08.318 [2024-07-24 17:25:54.143971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.156175] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:25:08.318 [2024-07-24 17:25:54.163516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.163567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:08.318 [2024-07-24 17:25:54.163601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.460 ms 00:25:08.318 [2024-07-24 17:25:54.163620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.233229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.233327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:08.318 [2024-07-24 17:25:54.233366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.575 ms 00:25:08.318 [2024-07-24 17:25:54.233377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.233588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.233606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:08.318 [2024-07-24 17:25:54.233640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:25:08.318 [2024-07-24 17:25:54.233667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.262222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.262276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:08.318 [2024-07-24 17:25:54.262311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.444 ms 00:25:08.318 [2024-07-24 17:25:54.262323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.288815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.288868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:08.318 [2024-07-24 17:25:54.288904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.447 ms 00:25:08.318 [2024-07-24 17:25:54.288914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.289764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.289806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:08.318 [2024-07-24 17:25:54.289838] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:25:08.318 [2024-07-24 17:25:54.289848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.376900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.376985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:08.318 [2024-07-24 17:25:54.377029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.005 ms 00:25:08.318 [2024-07-24 17:25:54.377041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.405917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.405974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:08.318 [2024-07-24 17:25:54.406011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.825 ms 00:25:08.318 [2024-07-24 17:25:54.406026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.432537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.432592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:08.318 [2024-07-24 17:25:54.432626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.449 ms 00:25:08.318 [2024-07-24 17:25:54.432637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.459617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.459688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:08.318 [2024-07-24 17:25:54.459724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.911 ms 00:25:08.318 [2024-07-24 17:25:54.459735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.459785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.459802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:08.318 [2024-07-24 17:25:54.459820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:08.318 [2024-07-24 17:25:54.459830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.459971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.318 [2024-07-24 17:25:54.459990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:08.318 [2024-07-24 17:25:54.460004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:08.318 [2024-07-24 17:25:54.460019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.318 [2024-07-24 17:25:54.461335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3218.437 ms, result 0 00:25:08.318 { 00:25:08.318 "name": "ftl0", 00:25:08.318 "uuid": "863803a2-da36-4822-9653-169b59fb4a39" 00:25:08.318 } 00:25:08.318 17:25:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:25:08.318 17:25:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:25:08.319 17:25:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:25:08.577 17:25:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:25:08.835 [2024-07-24 17:25:54.885474] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:08.835 I/O size of 69632 is greater than zero copy threshold (65536). 00:25:08.835 Zero copy mechanism will not be used. 00:25:08.835 Running I/O for 4 seconds... 00:25:13.021 00:25:13.021 Latency(us) 00:25:13.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.021 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:25:13.021 ftl0 : 4.00 1642.86 109.10 0.00 0.00 642.22 256.93 1027.72 00:25:13.021 =================================================================================================================== 00:25:13.021 Total : 1642.86 109.10 0.00 0.00 642.22 256.93 1027.72 00:25:13.021 [2024-07-24 17:25:58.895593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:13.021 0 00:25:13.021 17:25:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:25:13.021 [2024-07-24 17:25:59.033452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:13.021 Running I/O for 4 seconds... 00:25:17.206 00:25:17.206 Latency(us) 00:25:17.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.207 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:25:17.207 ftl0 : 4.02 7621.15 29.77 0.00 0.00 16754.20 314.65 30742.34 00:25:17.207 =================================================================================================================== 00:25:17.207 Total : 7621.15 29.77 0.00 0.00 16754.20 0.00 30742.34 00:25:17.207 [2024-07-24 17:26:03.060971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:17.207 0 00:25:17.207 17:26:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:25:17.207 [2024-07-24 17:26:03.193744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:17.207 Running I/O for 4 seconds... 
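A quick consistency check on the two summaries above, using MiB/s = IOPS x I/O size / 2^20: 1642.86 IOPS x 69632 B works out to 109.10 MiB/s for the qd=1 run, and 7621.15 IOPS x 4096 B to 29.77 MiB/s for the qd=128 run, matching the MiB/s column bdevperf prints. The 69632-byte (68 KiB) I/O size of the first run sits exactly one 4 KiB block above the 65536-byte zero-copy threshold, which is why bdevperf logs that the zero copy mechanism is disabled for that run only.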
00:25:21.396 00:25:21.396 Latency(us) 00:25:21.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.397 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:21.397 Verification LBA range: start 0x0 length 0x1400000 00:25:21.397 ftl0 : 4.01 5918.18 23.12 0.00 0.00 21549.88 348.16 25737.77 00:25:21.397 =================================================================================================================== 00:25:21.397 Total : 5918.18 23.12 0.00 0.00 21549.88 0.00 25737.77 00:25:21.397 [2024-07-24 17:26:07.223458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:21.397 0 00:25:21.397 17:26:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:21.397 [2024-07-24 17:26:07.492710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.397 [2024-07-24 17:26:07.492797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:21.397 [2024-07-24 17:26:07.492839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:21.397 [2024-07-24 17:26:07.492854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.397 [2024-07-24 17:26:07.492890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.397 [2024-07-24 17:26:07.496234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.397 [2024-07-24 17:26:07.496288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:21.397 [2024-07-24 17:26:07.496319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:25:21.397 [2024-07-24 17:26:07.496334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.397 [2024-07-24 17:26:07.498282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.397 [2024-07-24 17:26:07.498343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:21.397 [2024-07-24 17:26:07.498390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.922 ms 00:25:21.397 [2024-07-24 17:26:07.498403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.687845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.687981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:21.657 [2024-07-24 17:26:07.688017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 189.418 ms 00:25:21.657 [2024-07-24 17:26:07.688035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.693616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.693711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:21.657 [2024-07-24 17:26:07.693728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.537 ms 00:25:21.657 [2024-07-24 17:26:07.693741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.719699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.719782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:21.657 [2024-07-24 17:26:07.719799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.871 ms 00:25:21.657 [2024-07-24 17:26:07.719812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.736501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.736582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:21.657 [2024-07-24 17:26:07.736617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.648 ms 00:25:21.657 [2024-07-24 17:26:07.736631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.736794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.736819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:21.657 [2024-07-24 17:26:07.736863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:21.657 [2024-07-24 17:26:07.736894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.762192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.762270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:21.657 [2024-07-24 17:26:07.762286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.277 ms 00:25:21.657 [2024-07-24 17:26:07.762298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.787354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.787435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:21.657 [2024-07-24 17:26:07.787466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.017 ms 00:25:21.657 [2024-07-24 17:26:07.787479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.813064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.813152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:21.657 [2024-07-24 17:26:07.813168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.543 ms 00:25:21.657 [2024-07-24 17:26:07.813204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.838225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.657 [2024-07-24 17:26:07.838301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:21.657 [2024-07-24 17:26:07.838316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.921 ms 00:25:21.657 [2024-07-24 17:26:07.838331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.657 [2024-07-24 17:26:07.838369] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:21.657 [2024-07-24 17:26:07.838394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:21.657 [2024-07-24 17:26:07.838443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:21.657 [2024-07-24 17:26:07.838477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.838993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:21.658 [2024-07-24 17:26:07.839519] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:21.659 [2024-07-24 17:26:07.839827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:21.659 [2024-07-24 17:26:07.839838] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 863803a2-da36-4822-9653-169b59fb4a39 00:25:21.659 [2024-07-24 17:26:07.839852] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:21.659 [2024-07-24 17:26:07.839863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:21.659 [2024-07-24 17:26:07.839875] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:21.659 [2024-07-24 17:26:07.839889] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:21.659 [2024-07-24 17:26:07.839903] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:21.659 [2024-07-24 17:26:07.839914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:21.659 [2024-07-24 17:26:07.839927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:21.659 [2024-07-24 17:26:07.839937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:21.659 [2024-07-24 17:26:07.839951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:21.659 [2024-07-24 17:26:07.839962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.659 [2024-07-24 17:26:07.839976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:21.659 [2024-07-24 17:26:07.839988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:25:21.659 [2024-07-24 17:26:07.840001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.854807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.659 [2024-07-24 17:26:07.854871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:21.659 [2024-07-24 17:26:07.854902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.764 ms 00:25:21.659 [2024-07-24 17:26:07.854915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.855483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.659 [2024-07-24 17:26:07.855519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:21.659 [2024-07-24 17:26:07.855534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:25:21.659 [2024-07-24 17:26:07.855548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.893429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.659 [2024-07-24 17:26:07.893536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.659 [2024-07-24 17:26:07.893554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.659 [2024-07-24 17:26:07.893570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.893671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.659 [2024-07-24 17:26:07.893700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.659 [2024-07-24 17:26:07.893713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.659 [2024-07-24 17:26:07.893726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.893910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.659 [2024-07-24 17:26:07.893938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.659 [2024-07-24 17:26:07.893951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.659 [2024-07-24 17:26:07.893965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.659 [2024-07-24 17:26:07.893988] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.659 [2024-07-24 17:26:07.894005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.659 [2024-07-24 17:26:07.894017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.659 [2024-07-24 17:26:07.894031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:07.980013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:07.980113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.919 [2024-07-24 17:26:07.980147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:07.980163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.050728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.050838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.919 [2024-07-24 17:26:08.050857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.050871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.050997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.919 [2024-07-24 17:26:08.051035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.919 [2024-07-24 17:26:08.051155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.919 [2024-07-24 17:26:08.051370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:21.919 [2024-07-24 17:26:08.051485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.919 [2024-07-24 17:26:08.051572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051586] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.919 [2024-07-24 17:26:08.051703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.919 [2024-07-24 17:26:08.051716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.919 [2024-07-24 17:26:08.051730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.919 [2024-07-24 17:26:08.051884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.168 ms, result 0 00:25:21.919 true 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 78907 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 78907 ']' 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 78907 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78907 00:25:21.919 killing process with pid 78907 00:25:21.919 Received shutdown signal, test time was about 4.000000 seconds 00:25:21.919 00:25:21.919 Latency(us) 00:25:21.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.919 =================================================================================================================== 00:25:21.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78907' 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 78907 00:25:21.919 17:26:08 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 78907 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:25:26.129 Remove shared memory files 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:26.129 ************************************ 00:25:26.129 END TEST ftl_bdevperf 00:25:26.129 ************************************ 00:25:26.129 00:25:26.129 real 0m25.113s 00:25:26.129 user 0m28.408s 00:25:26.129 sys 0m1.199s 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.129 17:26:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.129 17:26:11 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:26.129 17:26:11 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:26.129 17:26:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.129 17:26:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:26.129 ************************************ 00:25:26.129 START TEST ftl_trim 00:25:26.129 ************************************ 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:26.130 * Looking for test storage... 00:25:26.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79264 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:26.130 17:26:11 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79264 00:25:26.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79264 ']' 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.130 17:26:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:26.130 [2024-07-24 17:26:12.034854] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
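For orientation, the device bring-up that the xtrace below walks through can be condensed to the following RPC sequence. This is a sketch assembled from the commands logged in this particular run; the lvstore and lvol UUIDs shown are the ones generated here and will differ on any other run:

# base device: thin-provisioned lvol carved from the NVMe namespace at 0000:00:11.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 503f0843-e8ee-4538-802c-a516385d8a08
# NV cache: one split of the second controller, sized to the computed cache_size (5171)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
# FTL bdev on top of both, with a 60 MB L2P DRAM limit and overprovisioning set to 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 52812a79-4471-4cce-8fe5-64ffc3de8bd9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10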
00:25:26.130 [2024-07-24 17:26:12.035205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79264 ] 00:25:26.130 [2024-07-24 17:26:12.199270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:26.397 [2024-07-24 17:26:12.410331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.397 [2024-07-24 17:26:12.410453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.397 [2024-07-24 17:26:12.410475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.965 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.965 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:26.965 17:26:13 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:27.533 17:26:13 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:27.533 17:26:13 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:27.533 17:26:13 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:27.533 { 00:25:27.533 "name": "nvme0n1", 00:25:27.533 "aliases": [ 00:25:27.533 "881b056d-8802-4042-b7dd-e7245fd263b1" 00:25:27.533 ], 00:25:27.533 "product_name": "NVMe disk", 00:25:27.533 "block_size": 4096, 00:25:27.533 "num_blocks": 1310720, 00:25:27.533 "uuid": "881b056d-8802-4042-b7dd-e7245fd263b1", 00:25:27.533 "assigned_rate_limits": { 00:25:27.533 "rw_ios_per_sec": 0, 00:25:27.533 "rw_mbytes_per_sec": 0, 00:25:27.533 "r_mbytes_per_sec": 0, 00:25:27.533 "w_mbytes_per_sec": 0 00:25:27.533 }, 00:25:27.533 "claimed": true, 00:25:27.533 "claim_type": "read_many_write_one", 00:25:27.533 "zoned": false, 00:25:27.533 "supported_io_types": { 00:25:27.533 "read": true, 00:25:27.533 "write": true, 00:25:27.533 "unmap": true, 00:25:27.533 "flush": true, 00:25:27.533 "reset": true, 00:25:27.533 "nvme_admin": true, 00:25:27.533 "nvme_io": true, 00:25:27.533 "nvme_io_md": false, 00:25:27.533 "write_zeroes": true, 00:25:27.533 "zcopy": false, 00:25:27.533 "get_zone_info": false, 00:25:27.533 "zone_management": false, 00:25:27.533 "zone_append": false, 00:25:27.533 "compare": true, 00:25:27.533 "compare_and_write": false, 00:25:27.533 "abort": true, 00:25:27.533 "seek_hole": false, 00:25:27.533 "seek_data": false, 00:25:27.533 
"copy": true, 00:25:27.533 "nvme_iov_md": false 00:25:27.533 }, 00:25:27.533 "driver_specific": { 00:25:27.533 "nvme": [ 00:25:27.533 { 00:25:27.533 "pci_address": "0000:00:11.0", 00:25:27.533 "trid": { 00:25:27.533 "trtype": "PCIe", 00:25:27.533 "traddr": "0000:00:11.0" 00:25:27.533 }, 00:25:27.533 "ctrlr_data": { 00:25:27.533 "cntlid": 0, 00:25:27.533 "vendor_id": "0x1b36", 00:25:27.533 "model_number": "QEMU NVMe Ctrl", 00:25:27.533 "serial_number": "12341", 00:25:27.533 "firmware_revision": "8.0.0", 00:25:27.533 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:27.533 "oacs": { 00:25:27.533 "security": 0, 00:25:27.533 "format": 1, 00:25:27.533 "firmware": 0, 00:25:27.533 "ns_manage": 1 00:25:27.533 }, 00:25:27.533 "multi_ctrlr": false, 00:25:27.533 "ana_reporting": false 00:25:27.533 }, 00:25:27.533 "vs": { 00:25:27.533 "nvme_version": "1.4" 00:25:27.533 }, 00:25:27.533 "ns_data": { 00:25:27.533 "id": 1, 00:25:27.533 "can_share": false 00:25:27.533 } 00:25:27.533 } 00:25:27.533 ], 00:25:27.533 "mp_policy": "active_passive" 00:25:27.533 } 00:25:27.533 } 00:25:27.533 ]' 00:25:27.533 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:27.792 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:25:27.792 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:27.792 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:27.792 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:27.792 17:26:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:25:27.792 17:26:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:27.792 17:26:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:27.792 17:26:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:27.792 17:26:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:27.792 17:26:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:28.051 17:26:14 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd 00:25:28.051 17:26:14 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:28.051 17:26:14 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1103dc16-f1f5-422a-a8cc-e2e1d7ddf9cd 00:25:28.309 17:26:14 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=503f0843-e8ee-4538-802c-a516385d8a08 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 503f0843-e8ee-4538-802c-a516385d8a08 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:28.568 17:26:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.568 17:26:14 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.568 17:26:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:28.568 17:26:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:25:28.568 17:26:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:25:28.568 17:26:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:28.827 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:28.827 { 00:25:28.827 "name": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:28.827 "aliases": [ 00:25:28.827 "lvs/nvme0n1p0" 00:25:28.827 ], 00:25:28.827 "product_name": "Logical Volume", 00:25:28.827 "block_size": 4096, 00:25:28.827 "num_blocks": 26476544, 00:25:28.827 "uuid": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:28.827 "assigned_rate_limits": { 00:25:28.827 "rw_ios_per_sec": 0, 00:25:28.827 "rw_mbytes_per_sec": 0, 00:25:28.827 "r_mbytes_per_sec": 0, 00:25:28.827 "w_mbytes_per_sec": 0 00:25:28.827 }, 00:25:28.827 "claimed": false, 00:25:28.827 "zoned": false, 00:25:28.827 "supported_io_types": { 00:25:28.827 "read": true, 00:25:28.827 "write": true, 00:25:28.827 "unmap": true, 00:25:28.827 "flush": false, 00:25:28.827 "reset": true, 00:25:28.827 "nvme_admin": false, 00:25:28.827 "nvme_io": false, 00:25:28.827 "nvme_io_md": false, 00:25:28.827 "write_zeroes": true, 00:25:28.827 "zcopy": false, 00:25:28.827 "get_zone_info": false, 00:25:28.827 "zone_management": false, 00:25:28.827 "zone_append": false, 00:25:28.827 "compare": false, 00:25:28.827 "compare_and_write": false, 00:25:28.827 "abort": false, 00:25:28.827 "seek_hole": true, 00:25:28.827 "seek_data": true, 00:25:28.827 "copy": false, 00:25:28.827 "nvme_iov_md": false 00:25:28.827 }, 00:25:28.827 "driver_specific": { 00:25:28.827 "lvol": { 00:25:28.827 "lvol_store_uuid": "503f0843-e8ee-4538-802c-a516385d8a08", 00:25:28.827 "base_bdev": "nvme0n1", 00:25:28.827 "thin_provision": true, 00:25:28.827 "num_allocated_clusters": 0, 00:25:28.827 "snapshot": false, 00:25:28.827 "clone": false, 00:25:28.827 "esnap_clone": false 00:25:28.827 } 00:25:28.827 } 00:25:28.827 } 00:25:28.827 ]' 00:25:28.827 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:29.085 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:25:29.085 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:29.085 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:29.085 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:29.085 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:25:29.085 17:26:15 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:29.085 17:26:15 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:29.085 17:26:15 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:29.344 17:26:15 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:29.344 17:26:15 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:29.344 17:26:15 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:29.344 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:29.344 
17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:29.344 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:25:29.344 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:25:29.344 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:29.602 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:29.602 { 00:25:29.602 "name": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:29.602 "aliases": [ 00:25:29.602 "lvs/nvme0n1p0" 00:25:29.602 ], 00:25:29.602 "product_name": "Logical Volume", 00:25:29.602 "block_size": 4096, 00:25:29.602 "num_blocks": 26476544, 00:25:29.602 "uuid": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:29.602 "assigned_rate_limits": { 00:25:29.602 "rw_ios_per_sec": 0, 00:25:29.602 "rw_mbytes_per_sec": 0, 00:25:29.602 "r_mbytes_per_sec": 0, 00:25:29.602 "w_mbytes_per_sec": 0 00:25:29.602 }, 00:25:29.602 "claimed": false, 00:25:29.602 "zoned": false, 00:25:29.602 "supported_io_types": { 00:25:29.602 "read": true, 00:25:29.602 "write": true, 00:25:29.602 "unmap": true, 00:25:29.602 "flush": false, 00:25:29.602 "reset": true, 00:25:29.602 "nvme_admin": false, 00:25:29.602 "nvme_io": false, 00:25:29.602 "nvme_io_md": false, 00:25:29.602 "write_zeroes": true, 00:25:29.602 "zcopy": false, 00:25:29.602 "get_zone_info": false, 00:25:29.602 "zone_management": false, 00:25:29.602 "zone_append": false, 00:25:29.602 "compare": false, 00:25:29.602 "compare_and_write": false, 00:25:29.602 "abort": false, 00:25:29.602 "seek_hole": true, 00:25:29.602 "seek_data": true, 00:25:29.602 "copy": false, 00:25:29.602 "nvme_iov_md": false 00:25:29.602 }, 00:25:29.602 "driver_specific": { 00:25:29.602 "lvol": { 00:25:29.602 "lvol_store_uuid": "503f0843-e8ee-4538-802c-a516385d8a08", 00:25:29.602 "base_bdev": "nvme0n1", 00:25:29.602 "thin_provision": true, 00:25:29.602 "num_allocated_clusters": 0, 00:25:29.602 "snapshot": false, 00:25:29.602 "clone": false, 00:25:29.602 "esnap_clone": false 00:25:29.602 } 00:25:29.602 } 00:25:29.602 } 00:25:29.602 ]' 00:25:29.602 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:29.602 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:25:29.602 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:29.602 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:29.603 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:29.603 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:25:29.603 17:26:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:29.603 17:26:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:29.861 17:26:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:29.861 17:26:15 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:29.861 17:26:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:29.861 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:29.861 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:29.861 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:25:29.861 17:26:15 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:25:29.861 17:26:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52812a79-4471-4cce-8fe5-64ffc3de8bd9 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:30.120 { 00:25:30.120 "name": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:30.120 "aliases": [ 00:25:30.120 "lvs/nvme0n1p0" 00:25:30.120 ], 00:25:30.120 "product_name": "Logical Volume", 00:25:30.120 "block_size": 4096, 00:25:30.120 "num_blocks": 26476544, 00:25:30.120 "uuid": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:30.120 "assigned_rate_limits": { 00:25:30.120 "rw_ios_per_sec": 0, 00:25:30.120 "rw_mbytes_per_sec": 0, 00:25:30.120 "r_mbytes_per_sec": 0, 00:25:30.120 "w_mbytes_per_sec": 0 00:25:30.120 }, 00:25:30.120 "claimed": false, 00:25:30.120 "zoned": false, 00:25:30.120 "supported_io_types": { 00:25:30.120 "read": true, 00:25:30.120 "write": true, 00:25:30.120 "unmap": true, 00:25:30.120 "flush": false, 00:25:30.120 "reset": true, 00:25:30.120 "nvme_admin": false, 00:25:30.120 "nvme_io": false, 00:25:30.120 "nvme_io_md": false, 00:25:30.120 "write_zeroes": true, 00:25:30.120 "zcopy": false, 00:25:30.120 "get_zone_info": false, 00:25:30.120 "zone_management": false, 00:25:30.120 "zone_append": false, 00:25:30.120 "compare": false, 00:25:30.120 "compare_and_write": false, 00:25:30.120 "abort": false, 00:25:30.120 "seek_hole": true, 00:25:30.120 "seek_data": true, 00:25:30.120 "copy": false, 00:25:30.120 "nvme_iov_md": false 00:25:30.120 }, 00:25:30.120 "driver_specific": { 00:25:30.120 "lvol": { 00:25:30.120 "lvol_store_uuid": "503f0843-e8ee-4538-802c-a516385d8a08", 00:25:30.120 "base_bdev": "nvme0n1", 00:25:30.120 "thin_provision": true, 00:25:30.120 "num_allocated_clusters": 0, 00:25:30.120 "snapshot": false, 00:25:30.120 "clone": false, 00:25:30.120 "esnap_clone": false 00:25:30.120 } 00:25:30.120 } 00:25:30.120 } 00:25:30.120 ]' 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:30.120 17:26:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:25:30.120 17:26:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:30.120 17:26:16 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 52812a79-4471-4cce-8fe5-64ffc3de8bd9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:30.380 [2024-07-24 17:26:16.517395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.517459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.380 [2024-07-24 17:26:16.517480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:30.380 [2024-07-24 17:26:16.517494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.521084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.521125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.380 [2024-07-24 17:26:16.521141] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.560 ms 00:25:30.380 [2024-07-24 17:26:16.521154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.521302] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.380 [2024-07-24 17:26:16.522220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.380 [2024-07-24 17:26:16.522250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.522270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.380 [2024-07-24 17:26:16.522283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:25:30.380 [2024-07-24 17:26:16.522296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.522525] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:25:30.380 [2024-07-24 17:26:16.524681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.524873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:30.380 [2024-07-24 17:26:16.525029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:30.380 [2024-07-24 17:26:16.525089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.534905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.535177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.380 [2024-07-24 17:26:16.535313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.524 ms 00:25:30.380 [2024-07-24 17:26:16.535373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.535736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.535882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.380 [2024-07-24 17:26:16.536004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:25:30.380 [2024-07-24 17:26:16.536131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.536244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.536303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.380 [2024-07-24 17:26:16.536413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:30.380 [2024-07-24 17:26:16.536470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.536609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:30.380 [2024-07-24 17:26:16.541790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.541969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.380 [2024-07-24 17:26:16.541995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.196 ms 00:25:30.380 [2024-07-24 17:26:16.542010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 
17:26:16.542135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.542163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.380 [2024-07-24 17:26:16.542179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:30.380 [2024-07-24 17:26:16.542193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.542232] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:30.380 [2024-07-24 17:26:16.542410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.380 [2024-07-24 17:26:16.542434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.380 [2024-07-24 17:26:16.542456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:30.380 [2024-07-24 17:26:16.542471] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.380 [2024-07-24 17:26:16.542487] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.380 [2024-07-24 17:26:16.542504] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:30.380 [2024-07-24 17:26:16.542518] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.380 [2024-07-24 17:26:16.542528] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.380 [2024-07-24 17:26:16.542576] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.380 [2024-07-24 17:26:16.542589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.542602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.380 [2024-07-24 17:26:16.542613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:25:30.380 [2024-07-24 17:26:16.542626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.542731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.380 [2024-07-24 17:26:16.542750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.380 [2024-07-24 17:26:16.542762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:30.380 [2024-07-24 17:26:16.542795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.380 [2024-07-24 17:26:16.542917] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.380 [2024-07-24 17:26:16.542967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.380 [2024-07-24 17:26:16.542980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.380 [2024-07-24 17:26:16.542994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.380 [2024-07-24 17:26:16.543006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.380 [2024-07-24 17:26:16.543018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.380 [2024-07-24 17:26:16.543029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:30.380 [2024-07-24 17:26:16.543042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:25:30.380 [2024-07-24 17:26:16.543052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:30.380 [2024-07-24 17:26:16.543065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.380 [2024-07-24 17:26:16.543076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.380 [2024-07-24 17:26:16.543090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:30.380 [2024-07-24 17:26:16.543102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.380 [2024-07-24 17:26:16.543115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.380 [2024-07-24 17:26:16.543126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:30.380 [2024-07-24 17:26:16.543138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.381 [2024-07-24 17:26:16.543164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.381 [2024-07-24 17:26:16.543198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.381 [2024-07-24 17:26:16.543261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.381 [2024-07-24 17:26:16.543293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.381 [2024-07-24 17:26:16.543327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.381 [2024-07-24 17:26:16.543358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.381 [2024-07-24 17:26:16.543386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.381 [2024-07-24 17:26:16.543398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:30.381 [2024-07-24 17:26:16.543408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.381 [2024-07-24 17:26:16.543422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.381 [2024-07-24 17:26:16.543432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:30.381 [2024-07-24 17:26:16.543443] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.381 [2024-07-24 17:26:16.543466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:30.381 [2024-07-24 17:26:16.543475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543487] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.381 [2024-07-24 17:26:16.543498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.381 [2024-07-24 17:26:16.543510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.381 [2024-07-24 17:26:16.543537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.381 [2024-07-24 17:26:16.543547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.381 [2024-07-24 17:26:16.543561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.381 [2024-07-24 17:26:16.543572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.381 [2024-07-24 17:26:16.543586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.381 [2024-07-24 17:26:16.543597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.381 [2024-07-24 17:26:16.543614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.381 [2024-07-24 17:26:16.543632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.543647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:30.381 [2024-07-24 17:26:16.543674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:30.381 [2024-07-24 17:26:16.543687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:30.381 [2024-07-24 17:26:16.543699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:30.381 [2024-07-24 17:26:16.544116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:30.381 [2024-07-24 17:26:16.544253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:30.381 [2024-07-24 17:26:16.544328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:30.381 [2024-07-24 17:26:16.544385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:30.381 [2024-07-24 17:26:16.544517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:30.381 [2024-07-24 17:26:16.544578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.544747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.544809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.544925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.545063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:30.381 [2024-07-24 17:26:16.545187] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.381 [2024-07-24 17:26:16.545405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.545546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.381 [2024-07-24 17:26:16.545624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.381 [2024-07-24 17:26:16.545766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.381 [2024-07-24 17:26:16.545897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.381 [2024-07-24 17:26:16.546040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.381 [2024-07-24 17:26:16.546060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.381 [2024-07-24 17:26:16.546077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.177 ms 00:25:30.381 [2024-07-24 17:26:16.546089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.381 [2024-07-24 17:26:16.546232] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
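A note on the sizing math traced earlier in this run: get_bdev_size pulls block_size and num_blocks out of bdev_get_bdevs with jq and reports the size in MiB, so 4096 B x 26476544 blocks / 2^20 = 103424 MiB; common.sh then carves a 5171 MiB write-buffer split from the NVMe cache device, and trim.sh hands both to bdev_ftl_create. A minimal sketch of that RPC sequence, assuming a running SPDK target that already exposes the same lvol and PCIe NVMe device as this run (UUIDs, addresses, and flags copied verbatim from the trace; an illustration, not the test script itself):

    # Sketch only: mirrors the RPC sequence in the trace above. Assumes the
    # SPDK app is up and the lvol 52812a79-... already exists.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol=52812a79-4471-4cce-8fe5-64ffc3de8bd9

    # Size in MiB = block_size * num_blocks / 2^20 (4096 * 26476544 -> 103424)
    bs=$($rpc bdev_get_bdevs -b "$lvol" | jq '.[] .block_size')
    nb=$($rpc bdev_get_bdevs -b "$lvol" | jq '.[] .num_blocks')
    echo $(( bs * nb / 1024 / 1024 ))            # prints 103424

    # Attach the PCIe NVMe used as NV cache and split off 5171 MiB for it
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev over base lvol + cache split (flags as traced)
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The startup log above confirms the resulting geometry: 103424.00 MiB base device capacity, 5171.00 MiB NV cache, 23592960 L2P entries.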
00:25:30.381 [2024-07-24 17:26:16.546252] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:33.665 [2024-07-24 17:26:19.154541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.154615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:33.665 [2024-07-24 17:26:19.154640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2608.312 ms 00:25:33.665 [2024-07-24 17:26:19.154711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.191938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.191999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.665 [2024-07-24 17:26:19.192022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.810 ms 00:25:33.665 [2024-07-24 17:26:19.192035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.192245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.192358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:33.665 [2024-07-24 17:26:19.192450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:33.665 [2024-07-24 17:26:19.192583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.244291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.244948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.665 [2024-07-24 17:26:19.245140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.561 ms 00:25:33.665 [2024-07-24 17:26:19.245173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.245420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.245447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.665 [2024-07-24 17:26:19.245469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:33.665 [2024-07-24 17:26:19.245495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.246212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.246243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.665 [2024-07-24 17:26:19.246264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:25:33.665 [2024-07-24 17:26:19.246278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.246495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.246515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.665 [2024-07-24 17:26:19.246545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:25:33.665 [2024-07-24 17:26:19.246567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.269163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.269236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.665 [2024-07-24 
17:26:19.269275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.543 ms 00:25:33.665 [2024-07-24 17:26:19.269287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.283881] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:33.665 [2024-07-24 17:26:19.305617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.305741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:33.665 [2024-07-24 17:26:19.305764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.156 ms 00:25:33.665 [2024-07-24 17:26:19.305778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.382581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.382699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:33.665 [2024-07-24 17:26:19.382738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.649 ms 00:25:33.665 [2024-07-24 17:26:19.382754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.383078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.383110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:33.665 [2024-07-24 17:26:19.383124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:25:33.665 [2024-07-24 17:26:19.383142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.411464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.665 [2024-07-24 17:26:19.411540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:33.665 [2024-07-24 17:26:19.411560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.278 ms 00:25:33.665 [2024-07-24 17:26:19.411574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.665 [2024-07-24 17:26:19.440000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.440064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:33.666 [2024-07-24 17:26:19.440100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.283 ms 00:25:33.666 [2024-07-24 17:26:19.440114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.441032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.441081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:33.666 [2024-07-24 17:26:19.441098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:25:33.666 [2024-07-24 17:26:19.441113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.527550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.527641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:33.666 [2024-07-24 17:26:19.527676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.391 ms 00:25:33.666 [2024-07-24 17:26:19.527716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 
17:26:19.556494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.556560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:33.666 [2024-07-24 17:26:19.556582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.662 ms 00:25:33.666 [2024-07-24 17:26:19.556596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.583965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.584037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:33.666 [2024-07-24 17:26:19.584054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.246 ms 00:25:33.666 [2024-07-24 17:26:19.584067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.611700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.611800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:33.666 [2024-07-24 17:26:19.611820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.545 ms 00:25:33.666 [2024-07-24 17:26:19.611834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.611958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.611983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:33.666 [2024-07-24 17:26:19.611997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:33.666 [2024-07-24 17:26:19.612014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.612121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.666 [2024-07-24 17:26:19.612141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:33.666 [2024-07-24 17:26:19.612153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:33.666 [2024-07-24 17:26:19.612189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.666 [2024-07-24 17:26:19.613759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.666 [2024-07-24 17:26:19.617868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3095.945 ms, result 0 00:25:33.666 [2024-07-24 17:26:19.618848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.666 { 00:25:33.666 "name": "ftl0", 00:25:33.666 "uuid": "1ed9e678-873d-4fa8-9ddb-2bec19870f6b" 00:25:33.666 } 00:25:33.666 17:26:19 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:33.666 17:26:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:33.925 [ 00:25:33.925 { 00:25:33.925 "name": "ftl0", 00:25:33.925 "aliases": [ 00:25:33.925 "1ed9e678-873d-4fa8-9ddb-2bec19870f6b" 00:25:33.925 ], 00:25:33.925 "product_name": "FTL disk", 00:25:33.925 "block_size": 4096, 00:25:33.925 "num_blocks": 23592960, 00:25:33.925 "uuid": "1ed9e678-873d-4fa8-9ddb-2bec19870f6b", 00:25:33.925 "assigned_rate_limits": { 00:25:33.925 "rw_ios_per_sec": 0, 00:25:33.925 "rw_mbytes_per_sec": 0, 00:25:33.925 "r_mbytes_per_sec": 0, 00:25:33.925 "w_mbytes_per_sec": 0 00:25:33.925 }, 00:25:33.925 "claimed": false, 00:25:33.925 "zoned": false, 00:25:33.925 "supported_io_types": { 00:25:33.925 "read": true, 00:25:33.925 "write": true, 00:25:33.925 "unmap": true, 00:25:33.925 "flush": true, 00:25:33.925 "reset": false, 00:25:33.925 "nvme_admin": false, 00:25:33.925 "nvme_io": false, 00:25:33.925 "nvme_io_md": false, 00:25:33.925 "write_zeroes": true, 00:25:33.925 "zcopy": false, 00:25:33.925 "get_zone_info": false, 00:25:33.925 "zone_management": false, 00:25:33.925 "zone_append": false, 00:25:33.925 "compare": false, 00:25:33.925 "compare_and_write": false, 00:25:33.925 "abort": false, 00:25:33.925 "seek_hole": false, 00:25:33.925 "seek_data": false, 00:25:33.925 "copy": false, 00:25:33.925 "nvme_iov_md": false 00:25:33.925 }, 00:25:33.925 "driver_specific": { 00:25:33.925 "ftl": { 00:25:33.925 "base_bdev": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:33.925 "cache": "nvc0n1p0" 00:25:33.925 } 00:25:33.925 } 00:25:33.925 } 00:25:33.925 ] 00:25:33.925 17:26:20 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:25:33.925 17:26:20 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:33.925 17:26:20 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:34.184 17:26:20 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:34.184 17:26:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:34.443 17:26:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:34.443 { 00:25:34.443 "name": "ftl0", 00:25:34.443 "aliases": [ 00:25:34.443 "1ed9e678-873d-4fa8-9ddb-2bec19870f6b" 00:25:34.443 ], 00:25:34.443 "product_name": "FTL disk", 00:25:34.443 "block_size": 4096, 00:25:34.443 "num_blocks": 23592960, 00:25:34.443 "uuid": "1ed9e678-873d-4fa8-9ddb-2bec19870f6b", 00:25:34.443 "assigned_rate_limits": { 00:25:34.443 "rw_ios_per_sec": 0, 00:25:34.443 "rw_mbytes_per_sec": 0, 00:25:34.444 "r_mbytes_per_sec": 0, 00:25:34.444 "w_mbytes_per_sec": 0 00:25:34.444 }, 00:25:34.444 "claimed": false, 00:25:34.444 "zoned": false, 00:25:34.444 "supported_io_types": { 00:25:34.444 "read": true, 00:25:34.444 "write": true, 00:25:34.444 "unmap": true, 00:25:34.444 "flush": true, 00:25:34.444 "reset": false, 00:25:34.444 "nvme_admin": false, 00:25:34.444 "nvme_io": false, 00:25:34.444 "nvme_io_md": false, 00:25:34.444 "write_zeroes": true, 00:25:34.444 "zcopy": false, 00:25:34.444 "get_zone_info": false, 00:25:34.444 "zone_management": false, 00:25:34.444 "zone_append": false, 00:25:34.444 "compare": false, 00:25:34.444 "compare_and_write": false, 00:25:34.444 "abort": false, 00:25:34.444 "seek_hole": false, 00:25:34.444 "seek_data": false, 00:25:34.444 "copy": false, 00:25:34.444 "nvme_iov_md": false 00:25:34.444 }, 00:25:34.444 "driver_specific": { 00:25:34.444 "ftl": { 00:25:34.444 "base_bdev": "52812a79-4471-4cce-8fe5-64ffc3de8bd9", 00:25:34.444 "cache": "nvc0n1p0" 
00:25:34.444 } 00:25:34.444 } 00:25:34.444 } 00:25:34.444 ]' 00:25:34.444 17:26:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:34.444 17:26:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:34.444 17:26:20 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:34.726 [2024-07-24 17:26:20.901092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.726 [2024-07-24 17:26:20.901160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.726 [2024-07-24 17:26:20.901199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.726 [2024-07-24 17:26:20.901212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.726 [2024-07-24 17:26:20.901258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:34.726 [2024-07-24 17:26:20.904908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.726 [2024-07-24 17:26:20.904947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.726 [2024-07-24 17:26:20.904963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:25:34.726 [2024-07-24 17:26:20.904979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.726 [2024-07-24 17:26:20.905496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.726 [2024-07-24 17:26:20.905525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.726 [2024-07-24 17:26:20.905539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:25:34.726 [2024-07-24 17:26:20.905556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.726 [2024-07-24 17:26:20.909161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.726 [2024-07-24 17:26:20.909194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.726 [2024-07-24 17:26:20.909224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 00:25:34.726 [2024-07-24 17:26:20.909237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.726 [2024-07-24 17:26:20.916312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.726 [2024-07-24 17:26:20.916368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.726 [2024-07-24 17:26:20.916400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:25:34.726 [2024-07-24 17:26:20.916413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:20.946220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:20.946281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.991 [2024-07-24 17:26:20.946299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.705 ms 00:25:34.991 [2024-07-24 17:26:20.946316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:20.965596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:20.965694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.991 [2024-07-24 17:26:20.965718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.164 ms 00:25:34.991 
[2024-07-24 17:26:20.965733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:20.965986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:20.966017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.991 [2024-07-24 17:26:20.966032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:25:34.991 [2024-07-24 17:26:20.966046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:20.995961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:20.996022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:34.991 [2024-07-24 17:26:20.996039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.876 ms 00:25:34.991 [2024-07-24 17:26:20.996052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:21.023756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:21.023801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:34.991 [2024-07-24 17:26:21.023834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.615 ms 00:25:34.991 [2024-07-24 17:26:21.023850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:21.051542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:21.051603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.991 [2024-07-24 17:26:21.051619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.599 ms 00:25:34.991 [2024-07-24 17:26:21.051632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:21.078829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.991 [2024-07-24 17:26:21.078891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.991 [2024-07-24 17:26:21.078947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.981 ms 00:25:34.991 [2024-07-24 17:26:21.078978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.991 [2024-07-24 17:26:21.079086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.991 [2024-07-24 17:26:21.079117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079215] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.991 [2024-07-24 17:26:21.079588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079601] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 
17:26:21.079977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.079990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:25:34.992 [2024-07-24 17:26:21.080379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:34.992 [2024-07-24 17:26:21.080633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.992 [2024-07-24 17:26:21.080649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:25:34.992 [2024-07-24 17:26:21.080666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:34.992 [2024-07-24 17:26:21.080681] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:34.992 [2024-07-24 17:26:21.080694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:34.992 [2024-07-24 17:26:21.080706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:34.992 [2024-07-24 17:26:21.080719] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.992 [2024-07-24 17:26:21.080730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:34.992 [2024-07-24 17:26:21.080757] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:34.992 [2024-07-24 17:26:21.080768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.992 [2024-07-24 17:26:21.080780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.992 [2024-07-24 17:26:21.080792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.992 [2024-07-24 17:26:21.080807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.992 [2024-07-24 17:26:21.080825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:25:34.992 [2024-07-24 17:26:21.080838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.992 [2024-07-24 17:26:21.096611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.993 [2024-07-24 17:26:21.096870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.993 [2024-07-24 17:26:21.096904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.731 ms 00:25:34.993 [2024-07-24 17:26:21.096924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.993 [2024-07-24 17:26:21.097454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.993 [2024-07-24 17:26:21.097488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.993 [2024-07-24 17:26:21.097504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:25:34.993 [2024-07-24 17:26:21.097518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.993 [2024-07-24 17:26:21.156601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.993 [2024-07-24 17:26:21.156681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.993 [2024-07-24 17:26:21.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.993 [2024-07-24 17:26:21.156716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.993 [2024-07-24 17:26:21.156881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.993 [2024-07-24 17:26:21.156904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.993 [2024-07-24 17:26:21.156918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.993 [2024-07-24 17:26:21.156932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.993 [2024-07-24 17:26:21.157023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.993 [2024-07-24 17:26:21.157046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.993 [2024-07-24 17:26:21.157059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.993 [2024-07-24 17:26:21.157077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.993 [2024-07-24 17:26:21.157115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.993 [2024-07-24 17:26:21.157133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.993 [2024-07-24 17:26:21.157146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.993 [2024-07-24 17:26:21.157159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.257685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:35.252 [2024-07-24 17:26:21.257775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.252 [2024-07-24 17:26:21.257795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.257809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.335216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.335313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.252 [2024-07-24 17:26:21.335356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.335370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.335485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.335511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.252 [2024-07-24 17:26:21.335524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.335540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.335614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.335631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.252 [2024-07-24 17:26:21.335643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.335656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.335872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.335898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.252 [2024-07-24 17:26:21.335933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.335947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.336022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.336045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.252 [2024-07-24 17:26:21.336058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.336072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.336135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.336155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.252 [2024-07-24 17:26:21.336171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.336187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 17:26:21.336264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.252 [2024-07-24 17:26:21.336292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.252 [2024-07-24 17:26:21.336306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.252 [2024-07-24 17:26:21.336320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.252 [2024-07-24 
17:26:21.336553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 435.445 ms, result 0 00:25:35.252 true 00:25:35.252 17:26:21 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79264 00:25:35.252 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79264 ']' 00:25:35.252 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79264 00:25:35.252 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:25:35.252 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79264 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79264' 00:25:35.253 killing process with pid 79264 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79264 00:25:35.253 17:26:21 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79264 00:25:40.522 17:26:26 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:41.459 65536+0 records in 00:25:41.459 65536+0 records out 00:25:41.459 268435456 bytes (268 MB, 256 MiB) copied, 1.01482 s, 265 MB/s 00:25:41.459 17:26:27 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:41.459 [2024-07-24 17:26:27.593482] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
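A quick sanity check on the dd numbers a few records above (a back-of-the-envelope sketch, not part of the test harness; the variable names are hypothetical): 65536 records of 4 KiB each is exactly 256 MiB, and the reported transfer rate follows from the elapsed time.

    bs=4096; count=65536                    # mirrors "dd if=/dev/urandom bs=4K count=65536"
    bytes=$(( bs * count ))                 # 268435456 bytes, i.e. the "268 MB, 256 MiB" printed above
    echo "$(( bytes / 1024 / 1024 )) MiB"   # -> 256
    # 268435456 B / 1.01482 s is ~264.5e6 B/s, which dd reports as 265 MB/s (decimal MB)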
00:25:41.459 [2024-07-24 17:26:27.593631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79459 ] 00:25:41.718 [2024-07-24 17:26:27.754040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.718 [2024-07-24 17:26:27.949791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.288 [2024-07-24 17:26:28.274189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.288 [2024-07-24 17:26:28.274277] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.288 [2024-07-24 17:26:28.436298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.436356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.288 [2024-07-24 17:26:28.436383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:42.288 [2024-07-24 17:26:28.436399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.439773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.439819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.288 [2024-07-24 17:26:28.439844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:25:42.288 [2024-07-24 17:26:28.439864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.440194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.288 [2024-07-24 17:26:28.441174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.288 [2024-07-24 17:26:28.441219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.441241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.288 [2024-07-24 17:26:28.441260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:25:42.288 [2024-07-24 17:26:28.441277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.443426] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:42.288 [2024-07-24 17:26:28.458652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.458698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:42.288 [2024-07-24 17:26:28.458728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.221 ms 00:25:42.288 [2024-07-24 17:26:28.458747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.458894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.458950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:42.288 [2024-07-24 17:26:28.458973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:42.288 [2024-07-24 17:26:28.458991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.467899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
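Every FTL management step in the trace above and below is logged by mngt/ftl_mngt.c as a fixed quadruple: the step kind (Action or Rollback), its name, its duration in ms, and its status. A hypothetical post-processing one-liner (not part of the test suite; it assumes one log record per line, as in the raw console output, and "console.log" is a placeholder file name) can rank the slowest steps:

    # Pair each "name:" record with the "duration:" record that follows it,
    # then sort the steps by duration, longest first.
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, name }' console.log |
      sort -rn | head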
00:25:42.288 [2024-07-24 17:26:28.467945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.288 [2024-07-24 17:26:28.467968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.830 ms 00:25:42.288 [2024-07-24 17:26:28.467986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.468178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.468211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.288 [2024-07-24 17:26:28.468234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:42.288 [2024-07-24 17:26:28.468253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.468334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.468359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.288 [2024-07-24 17:26:28.468385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:42.288 [2024-07-24 17:26:28.468403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.468457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:42.288 [2024-07-24 17:26:28.473406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.473448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.288 [2024-07-24 17:26:28.473472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:25:42.288 [2024-07-24 17:26:28.473491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.473649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.473700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.288 [2024-07-24 17:26:28.473742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:42.288 [2024-07-24 17:26:28.473764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.473817] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:42.288 [2024-07-24 17:26:28.473871] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:42.288 [2024-07-24 17:26:28.473935] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:42.288 [2024-07-24 17:26:28.473969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:42.288 [2024-07-24 17:26:28.474111] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:42.288 [2024-07-24 17:26:28.474139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.288 [2024-07-24 17:26:28.474162] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:42.288 [2024-07-24 17:26:28.474184] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.288 [2024-07-24 17:26:28.474206] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.288 [2024-07-24 17:26:28.474233] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:42.288 [2024-07-24 17:26:28.474266] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.288 [2024-07-24 17:26:28.474284] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:42.288 [2024-07-24 17:26:28.474300] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:42.288 [2024-07-24 17:26:28.474318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.474351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.288 [2024-07-24 17:26:28.474370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:25:42.288 [2024-07-24 17:26:28.474387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.474519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.288 [2024-07-24 17:26:28.474544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.288 [2024-07-24 17:26:28.474569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:42.288 [2024-07-24 17:26:28.474587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.288 [2024-07-24 17:26:28.474737] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.288 [2024-07-24 17:26:28.474781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.288 [2024-07-24 17:26:28.474801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.288 [2024-07-24 17:26:28.474818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.288 [2024-07-24 17:26:28.474836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.288 [2024-07-24 17:26:28.474852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.288 [2024-07-24 17:26:28.474869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:42.288 [2024-07-24 17:26:28.474885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.288 [2024-07-24 17:26:28.474901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:42.288 [2024-07-24 17:26:28.474916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.288 [2024-07-24 17:26:28.474963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.289 [2024-07-24 17:26:28.474981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:42.289 [2024-07-24 17:26:28.474998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.289 [2024-07-24 17:26:28.475014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.289 [2024-07-24 17:26:28.475032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:42.289 [2024-07-24 17:26:28.475050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:42.289 [2024-07-24 17:26:28.475085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475119] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.289 [2024-07-24 17:26:28.475153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.289 [2024-07-24 17:26:28.475201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.289 [2024-07-24 17:26:28.475250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.289 [2024-07-24 17:26:28.475323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.289 [2024-07-24 17:26:28.475371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.289 [2024-07-24 17:26:28.475402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.289 [2024-07-24 17:26:28.475419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:42.289 [2024-07-24 17:26:28.475435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.289 [2024-07-24 17:26:28.475451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:42.289 [2024-07-24 17:26:28.475468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:42.289 [2024-07-24 17:26:28.475484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:42.289 [2024-07-24 17:26:28.475516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:42.289 [2024-07-24 17:26:28.475533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475548] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.289 [2024-07-24 17:26:28.475565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.289 [2024-07-24 17:26:28.475583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.289 [2024-07-24 17:26:28.475624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.289 [2024-07-24 17:26:28.475641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.289 [2024-07-24 17:26:28.475657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.289 
[2024-07-24 17:26:28.475687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.289 [2024-07-24 17:26:28.475707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.289 [2024-07-24 17:26:28.475724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.289 [2024-07-24 17:26:28.475744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.289 [2024-07-24 17:26:28.475765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.475784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:42.289 [2024-07-24 17:26:28.475802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:42.289 [2024-07-24 17:26:28.475820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:42.289 [2024-07-24 17:26:28.475837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:42.289 [2024-07-24 17:26:28.475855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:42.289 [2024-07-24 17:26:28.475874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:42.289 [2024-07-24 17:26:28.475891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:42.289 [2024-07-24 17:26:28.475909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:42.289 [2024-07-24 17:26:28.475927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:42.289 [2024-07-24 17:26:28.475944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.475961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.475980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.475997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.476015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:42.289 [2024-07-24 17:26:28.476032] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.289 [2024-07-24 17:26:28.476051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.476069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.289 [2024-07-24 17:26:28.476087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.289 [2024-07-24 17:26:28.476105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.289 [2024-07-24 17:26:28.476124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.289 [2024-07-24 17:26:28.476144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.289 [2024-07-24 17:26:28.476161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.289 [2024-07-24 17:26:28.476180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:25:42.289 [2024-07-24 17:26:28.476197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.289 [2024-07-24 17:26:28.519318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.289 [2024-07-24 17:26:28.519382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.289 [2024-07-24 17:26:28.519419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.020 ms 00:25:42.289 [2024-07-24 17:26:28.519436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.289 [2024-07-24 17:26:28.519801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.289 [2024-07-24 17:26:28.519841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.289 [2024-07-24 17:26:28.519874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:42.289 [2024-07-24 17:26:28.519893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.558713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.558768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.549 [2024-07-24 17:26:28.558793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.765 ms 00:25:42.549 [2024-07-24 17:26:28.558810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.559048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.559077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.549 [2024-07-24 17:26:28.559098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:42.549 [2024-07-24 17:26:28.559117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.559861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.559902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.549 [2024-07-24 17:26:28.559927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:25:42.549 [2024-07-24 17:26:28.559948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.560200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.560238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.549 [2024-07-24 17:26:28.560261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:25:42.549 [2024-07-24 17:26:28.560279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.577877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.577921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.549 [2024-07-24 17:26:28.577944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.547 ms 00:25:42.549 [2024-07-24 17:26:28.577963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.592982] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:42.549 [2024-07-24 17:26:28.593030] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:42.549 [2024-07-24 17:26:28.593056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.593075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:42.549 [2024-07-24 17:26:28.593094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.902 ms 00:25:42.549 [2024-07-24 17:26:28.593110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.619076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.619136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:42.549 [2024-07-24 17:26:28.619161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.851 ms 00:25:42.549 [2024-07-24 17:26:28.619180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.634484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.634529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:42.549 [2024-07-24 17:26:28.634569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.141 ms 00:25:42.549 [2024-07-24 17:26:28.634588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.650434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.650478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:42.549 [2024-07-24 17:26:28.650500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.661 ms 00:25:42.549 [2024-07-24 17:26:28.650517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.651643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.651750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.549 [2024-07-24 17:26:28.651779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:25:42.549 [2024-07-24 17:26:28.651800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.722598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.722687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:42.549 [2024-07-24 17:26:28.722716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 70.740 ms 00:25:42.549 [2024-07-24 17:26:28.722733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.733477] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:42.549 [2024-07-24 17:26:28.751977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.752038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:42.549 [2024-07-24 17:26:28.752065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.928 ms 00:25:42.549 [2024-07-24 17:26:28.752082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.752289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.752321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:42.549 [2024-07-24 17:26:28.752351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:42.549 [2024-07-24 17:26:28.752366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.752465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.752499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:42.549 [2024-07-24 17:26:28.752520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:42.549 [2024-07-24 17:26:28.752537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.752588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.752610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:42.549 [2024-07-24 17:26:28.752629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:42.549 [2024-07-24 17:26:28.752713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.549 [2024-07-24 17:26:28.752784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:42.549 [2024-07-24 17:26:28.752812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.549 [2024-07-24 17:26:28.752831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:42.550 [2024-07-24 17:26:28.752851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:42.550 [2024-07-24 17:26:28.752870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.550 [2024-07-24 17:26:28.780273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.550 [2024-07-24 17:26:28.780317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:42.550 [2024-07-24 17:26:28.780348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.353 ms 00:25:42.550 [2024-07-24 17:26:28.780367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.550 [2024-07-24 17:26:28.780517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.550 [2024-07-24 17:26:28.780544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:42.550 [2024-07-24 17:26:28.780564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:42.550 [2024-07-24 17:26:28.780580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
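The superblock metadata layout dumped earlier lists each region as hex block offsets and sizes (blk_offs/blk_sz), while dump_region prints the same regions in MiB. The two views can be cross-checked under a 4 KiB-block assumption, which the dump itself implies (the 0x20-block sb region shows as 0.12 MiB). A sketch, not part of the harness:

    # Region type 0x2 is the L2P: blk_sz 0x5a00 blocks of 4 KiB each.
    echo "$(( 0x5a00 * 4096 / 1024 / 1024 )) MiB"    # -> 90, matches "Region l2p ... blocks: 90.00 MiB"
    # Same figure from the other direction: 23592960 L2P entries x 4 B addresses.
    echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"     # -> 90
    # Each band in the validity dumps is 261120 blocks: 261120 * 4 KiB = 1020 MiB per band.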
00:25:42.550 [2024-07-24 17:26:28.782131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:42.550 [2024-07-24 17:26:28.785743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.377 ms, result 0 00:25:42.808 [2024-07-24 17:26:28.786626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.808 [2024-07-24 17:26:28.801157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:54.167  Copying: 22/256 [MB] (22 MBps) Copying: 44/256 [MB] (22 MBps) Copying: 67/256 [MB] (22 MBps) Copying: 89/256 [MB] (21 MBps) Copying: 111/256 [MB] (22 MBps) Copying: 134/256 [MB] (22 MBps) Copying: 157/256 [MB] (22 MBps) Copying: 179/256 [MB] (22 MBps) Copying: 201/256 [MB] (22 MBps) Copying: 224/256 [MB] (22 MBps) Copying: 246/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-24 17:26:40.217603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.167 [2024-07-24 17:26:40.229713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.229773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.167 [2024-07-24 17:26:40.229808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:54.167 [2024-07-24 17:26:40.229820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.229850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:54.167 [2024-07-24 17:26:40.233233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.233290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.167 [2024-07-24 17:26:40.233320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:25:54.167 [2024-07-24 17:26:40.233331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.235353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.235412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.167 [2024-07-24 17:26:40.235443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:25:54.167 [2024-07-24 17:26:40.235454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.242378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.242438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.167 [2024-07-24 17:26:40.242469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:25:54.167 [2024-07-24 17:26:40.242487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.249164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.249219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.167 [2024-07-24 17:26:40.249249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms 00:25:54.167 [2024-07-24 17:26:40.249260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:54.167 [2024-07-24 17:26:40.277408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.277466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.167 [2024-07-24 17:26:40.277498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.096 ms 00:25:54.167 [2024-07-24 17:26:40.277509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.294336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.294396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.167 [2024-07-24 17:26:40.294429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.691 ms 00:25:54.167 [2024-07-24 17:26:40.294441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.294611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.294645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.167 [2024-07-24 17:26:40.294658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:54.167 [2024-07-24 17:26:40.294669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.323636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.323702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:54.167 [2024-07-24 17:26:40.323734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.932 ms 00:25:54.167 [2024-07-24 17:26:40.323745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.351727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.351783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:54.167 [2024-07-24 17:26:40.351813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.922 ms 00:25:54.167 [2024-07-24 17:26:40.351825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.167 [2024-07-24 17:26:40.379796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.167 [2024-07-24 17:26:40.379852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.167 [2024-07-24 17:26:40.379883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.912 ms 00:25:54.167 [2024-07-24 17:26:40.379893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.426 [2024-07-24 17:26:40.407171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.426 [2024-07-24 17:26:40.407229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:54.426 [2024-07-24 17:26:40.407260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.161 ms 00:25:54.426 [2024-07-24 17:26:40.407271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.426 [2024-07-24 17:26:40.407332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:54.426 [2024-07-24 17:26:40.407358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.407995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408031] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:54.426 [2024-07-24 17:26:40.408260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408333] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:54.427 [2024-07-24 17:26:40.408638] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:54.427 [2024-07-24 17:26:40.408659] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:25:54.427 [2024-07-24 17:26:40.408673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:54.427 [2024-07-24 17:26:40.408684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:54.427 [2024-07-24 17:26:40.408695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:54.427 [2024-07-24 17:26:40.408720] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:54.427 [2024-07-24 17:26:40.408731] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:54.427 [2024-07-24 17:26:40.408743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:54.427 [2024-07-24 17:26:40.408755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:54.427 [2024-07-24 17:26:40.408766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:54.427 [2024-07-24 17:26:40.408776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:54.427 [2024-07-24 17:26:40.408787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.427 [2024-07-24 17:26:40.408799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:54.427 [2024-07-24 17:26:40.408812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:25:54.427 [2024-07-24 17:26:40.408828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.425143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.427 [2024-07-24 17:26:40.425199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:54.427 [2024-07-24 17:26:40.425230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.289 ms 00:25:54.427 [2024-07-24 17:26:40.425241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.425799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.427 [2024-07-24 17:26:40.425828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:54.427 [2024-07-24 17:26:40.425850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:54.427 [2024-07-24 17:26:40.425861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.462320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.462384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.427 [2024-07-24 17:26:40.462416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.462426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.462519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.462535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.427 [2024-07-24 17:26:40.462550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.462560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.462628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.462661] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.427 [2024-07-24 17:26:40.462689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.462699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.462746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.462760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.427 [2024-07-24 17:26:40.462772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.462789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.553048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.553135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.427 [2024-07-24 17:26:40.553168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.553180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.627418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.627497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.427 [2024-07-24 17:26:40.627537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.627548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.627627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.627644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.427 [2024-07-24 17:26:40.627655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.627703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.627741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.627754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.427 [2024-07-24 17:26:40.627765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.627776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.627948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.627966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.427 [2024-07-24 17:26:40.627980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.427 [2024-07-24 17:26:40.627991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.427 [2024-07-24 17:26:40.628041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.427 [2024-07-24 17:26:40.628059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:54.427 [2024-07-24 17:26:40.628085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.428 [2024-07-24 17:26:40.628096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.428 [2024-07-24 17:26:40.628150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:25:54.428 [2024-07-24 17:26:40.628179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:54.428 [2024-07-24 17:26:40.628192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:54.428 [2024-07-24 17:26:40.628203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:54.428 [2024-07-24 17:26:40.628258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:54.428 [2024-07-24 17:26:40.628275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:54.428 [2024-07-24 17:26:40.628287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:54.428 [2024-07-24 17:26:40.628298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:54.428 [2024-07-24 17:26:40.628476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.751 ms, result 0
00:25:55.799 
00:25:55.799 
00:25:55.799 17:26:41 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79606
00:25:55.799 17:26:41 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:55.799 17:26:41 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79606
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79606 ']'
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:55.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:55.799 17:26:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:25:55.799 [2024-07-24 17:26:41.942471] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
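The waitforlisten 79606 call traced above is what gates the test on the target actually coming up: it polls the RPC UNIX socket (/var/tmp/spdk.sock) until the freshly launched spdk_tgt answers, and gives up if the process dies or max_retries (100 here) is exhausted. A minimal sketch of that polling pattern, assuming rpc_get_methods as the liveness probe; the real helper in autotest_common.sh carries more bookkeeping:

# Poll the target's RPC socket until it responds or the process exits.
# pid and rpc_addr mirror the values seen in the trace above.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # Bail out early if the target crashed instead of listening.
        kill -0 "$pid" 2>/dev/null || return 1
        # Cheap probe over the UNIX socket; -t 1 bounds each attempt to 1 second.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

Only once a loop like this succeeds does the script move on to issuing RPCs, which is why the "Waiting for process..." echo appears before any FTL log lines from the new target.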
00:25:55.799 [2024-07-24 17:26:41.942710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79606 ]
00:25:56.057 [2024-07-24 17:26:42.106640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.314 [2024-07-24 17:26:42.303591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.879 17:26:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:56.879 17:26:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:25:56.879 17:26:43 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:25:57.137 [2024-07-24 17:26:43.233608] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:57.137 [2024-07-24 17:26:43.233733] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:57.395 [2024-07-24 17:26:43.411902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.395 [2024-07-24 17:26:43.411985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:57.395 [2024-07-24 17:26:43.412021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:25:57.396 [2024-07-24 17:26:43.412036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.396 [2024-07-24 17:26:43.415411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.396 [2024-07-24 17:26:43.415472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:57.396 [2024-07-24 17:26:43.415504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms
00:25:57.396 [2024-07-24 17:26:43.415517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.396 [2024-07-24 17:26:43.415674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:57.396 [2024-07-24 17:26:43.416639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:57.396 [2024-07-24 17:26:43.416724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.396 [2024-07-24 17:26:43.416758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:57.396 [2024-07-24 17:26:43.416771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms
00:25:57.396 [2024-07-24 17:26:43.416788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.396 [2024-07-24 17:26:43.418913] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:57.396 [2024-07-24 17:26:43.433815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.396 [2024-07-24 17:26:43.433874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:57.396 [2024-07-24 17:26:43.433909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.900 ms
00:25:57.396 [2024-07-24 17:26:43.433921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.396 [2024-07-24 17:26:43.434034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.396 [2024-07-24 17:26:43.434055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:57.396 [2024-07-24 17:26:43.434071]
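The rpc.py load_config step above replays a previously captured JSON configuration into the freshly started target; the FTL startup actions it triggers (Check configuration, Open base bdev, Open cache bdev, Load super block) are its direct result. A hypothetical, trimmed-down sketch of such a call, assuming the config arrives on stdin and using an illustrative base bdev name (nvme0n1 is not shown in this log; nvc0n1p0 appears above as the write buffer cache):

# Hypothetical minimal config: recreate ftl0 with an illustrative base bdev
# and the nvc0n1p0 cache partition named in the log above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "nvme0n1",
            "cache": "nvc0n1p0"
          }
        }
      ]
    }
  ]
}
EOF

The two "unable to find bdev with name: nvc0n1" notices are expected during replay: the FTL bdev is registered before its backing devices finish examination, and the open is retried once they appear.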
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:57.396 [2024-07-24 17:26:43.434082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.442639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.442711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.396 [2024-07-24 17:26:43.442750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.444 ms 00:25:57.396 [2024-07-24 17:26:43.442762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.442898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.442918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.396 [2024-07-24 17:26:43.442947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:57.396 [2024-07-24 17:26:43.442963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.443022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.443053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.396 [2024-07-24 17:26:43.443068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:57.396 [2024-07-24 17:26:43.443080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.443119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:57.396 [2024-07-24 17:26:43.447801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.447842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.396 [2024-07-24 17:26:43.447874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:25:57.396 [2024-07-24 17:26:43.447888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.447973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.447998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.396 [2024-07-24 17:26:43.448014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:57.396 [2024-07-24 17:26:43.448027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.448090] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.396 [2024-07-24 17:26:43.448121] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.396 [2024-07-24 17:26:43.448172] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.396 [2024-07-24 17:26:43.448201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:57.396 [2024-07-24 17:26:43.448306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.396 [2024-07-24 17:26:43.448334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.396 [2024-07-24 17:26:43.448350] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:57.396 [2024-07-24 17:26:43.448368] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.396 [2024-07-24 17:26:43.448382] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.396 [2024-07-24 17:26:43.448398] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:57.396 [2024-07-24 17:26:43.448410] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.396 [2024-07-24 17:26:43.448424] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.396 [2024-07-24 17:26:43.448436] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.396 [2024-07-24 17:26:43.448455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.448467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.396 [2024-07-24 17:26:43.448482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:25:57.396 [2024-07-24 17:26:43.448496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.448595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.396 [2024-07-24 17:26:43.448617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.396 [2024-07-24 17:26:43.448666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:57.396 [2024-07-24 17:26:43.448681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.396 [2024-07-24 17:26:43.448806] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.396 [2024-07-24 17:26:43.448826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.396 [2024-07-24 17:26:43.448842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.396 [2024-07-24 17:26:43.448854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.396 [2024-07-24 17:26:43.448874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.396 [2024-07-24 17:26:43.448886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.396 [2024-07-24 17:26:43.448900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:57.396 [2024-07-24 17:26:43.448911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.396 [2024-07-24 17:26:43.448927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:57.396 [2024-07-24 17:26:43.448938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.396 [2024-07-24 17:26:43.448952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.396 [2024-07-24 17:26:43.448963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:57.396 [2024-07-24 17:26:43.448976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.396 [2024-07-24 17:26:43.448987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.396 [2024-07-24 17:26:43.449000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:57.396 [2024-07-24 17:26:43.449011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.396 
[2024-07-24 17:26:43.449024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.396 [2024-07-24 17:26:43.449035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:57.396 [2024-07-24 17:26:43.449048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.396 [2024-07-24 17:26:43.449074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.396 [2024-07-24 17:26:43.449101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.396 [2024-07-24 17:26:43.449112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.396 [2024-07-24 17:26:43.449139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.396 [2024-07-24 17:26:43.449152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.396 [2024-07-24 17:26:43.449189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.396 [2024-07-24 17:26:43.449201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449214] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.396 [2024-07-24 17:26:43.449225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.396 [2024-07-24 17:26:43.449238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.396 [2024-07-24 17:26:43.449263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.396 [2024-07-24 17:26:43.449274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:57.396 [2024-07-24 17:26:43.449287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.396 [2024-07-24 17:26:43.449298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.396 [2024-07-24 17:26:43.449311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:57.396 [2024-07-24 17:26:43.449321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.396 [2024-07-24 17:26:43.449337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.396 [2024-07-24 17:26:43.449348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:57.397 [2024-07-24 17:26:43.449361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.397 [2024-07-24 17:26:43.449372] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.397 [2024-07-24 17:26:43.449386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.397 [2024-07-24 17:26:43.449398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.397 [2024-07-24 17:26:43.449412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.397 [2024-07-24 17:26:43.449425] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:57.397 [2024-07-24 17:26:43.449438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.397 [2024-07-24 17:26:43.449449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.397 [2024-07-24 17:26:43.449463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.397 [2024-07-24 17:26:43.449474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.397 [2024-07-24 17:26:43.449487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.397 [2024-07-24 17:26:43.449501] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.397 [2024-07-24 17:26:43.449518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:57.397 [2024-07-24 17:26:43.449560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:57.397 [2024-07-24 17:26:43.449572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:57.397 [2024-07-24 17:26:43.449587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:57.397 [2024-07-24 17:26:43.449599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:57.397 [2024-07-24 17:26:43.449613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:57.397 [2024-07-24 17:26:43.449625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:57.397 [2024-07-24 17:26:43.449639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:57.397 [2024-07-24 17:26:43.449666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:57.397 [2024-07-24 17:26:43.449682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:57.397 [2024-07-24 17:26:43.449747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.397 [2024-07-24 
17:26:43.449763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.397 [2024-07-24 17:26:43.449793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.397 [2024-07-24 17:26:43.449805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.397 [2024-07-24 17:26:43.449819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.397 [2024-07-24 17:26:43.449832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.449846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.397 [2024-07-24 17:26:43.449859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:25:57.397 [2024-07-24 17:26:43.449877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.486617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.486736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.397 [2024-07-24 17:26:43.486762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.656 ms 00:25:57.397 [2024-07-24 17:26:43.486777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.486989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.487015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.397 [2024-07-24 17:26:43.487045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:57.397 [2024-07-24 17:26:43.487075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.526069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.526169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.397 [2024-07-24 17:26:43.526189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.962 ms 00:25:57.397 [2024-07-24 17:26:43.526206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.526334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.526361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.397 [2024-07-24 17:26:43.526376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.397 [2024-07-24 17:26:43.526408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.527056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.527141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.397 [2024-07-24 17:26:43.527157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:25:57.397 [2024-07-24 17:26:43.527174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.527376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.527417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.397 [2024-07-24 17:26:43.527431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:25:57.397 [2024-07-24 17:26:43.527448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.547499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.547588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.397 [2024-07-24 17:26:43.547605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.019 ms 00:25:57.397 [2024-07-24 17:26:43.547622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.562743] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:57.397 [2024-07-24 17:26:43.562804] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.397 [2024-07-24 17:26:43.562841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.562855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.397 [2024-07-24 17:26:43.562868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.058 ms 00:25:57.397 [2024-07-24 17:26:43.562881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.589667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.589759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.397 [2024-07-24 17:26:43.589777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.685 ms 00:25:57.397 [2024-07-24 17:26:43.589795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.604639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.604724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.397 [2024-07-24 17:26:43.604750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.729 ms 00:25:57.397 [2024-07-24 17:26:43.604766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.618583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.618669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.397 [2024-07-24 17:26:43.618685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.734 ms 00:25:57.397 [2024-07-24 17:26:43.618699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.397 [2024-07-24 17:26:43.619681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.397 [2024-07-24 17:26:43.619751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.397 [2024-07-24 17:26:43.619767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:25:57.397 [2024-07-24 17:26:43.619783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 
17:26:43.701481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.701600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.655 [2024-07-24 17:26:43.701623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.665 ms 00:25:57.655 [2024-07-24 17:26:43.701640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.712398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:57.655 [2024-07-24 17:26:43.731166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.731257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.655 [2024-07-24 17:26:43.731294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.325 ms 00:25:57.655 [2024-07-24 17:26:43.731306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.731440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.731460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.655 [2024-07-24 17:26:43.731478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:57.655 [2024-07-24 17:26:43.731490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.731600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.731617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.655 [2024-07-24 17:26:43.731641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:57.655 [2024-07-24 17:26:43.731710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.731755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.731771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.655 [2024-07-24 17:26:43.731789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:57.655 [2024-07-24 17:26:43.731801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.731857] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.655 [2024-07-24 17:26:43.731874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.731896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.655 [2024-07-24 17:26:43.731910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:57.655 [2024-07-24 17:26:43.731934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.759215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.759320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.655 [2024-07-24 17:26:43.759337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.249 ms 00:25:57.655 [2024-07-24 17:26:43.759353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.655 [2024-07-24 17:26:43.759472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.655 [2024-07-24 17:26:43.759525] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:57.655 [2024-07-24 17:26:43.759576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:25:57.655 [2024-07-24 17:26:43.759593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.655 [2024-07-24 17:26:43.760995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:57.655 [2024-07-24 17:26:43.764752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.562 ms, result 0
00:25:57.655 [2024-07-24 17:26:43.766452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:57.655 Some configs were skipped because the RPC state that can call them passed over.
00:25:57.655 17:26:43 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:57.913 [2024-07-24 17:26:44.002689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:57.913 [2024-07-24 17:26:44.002792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:57.913 [2024-07-24 17:26:44.002820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms
00:25:57.913 [2024-07-24 17:26:44.002833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:57.913 [2024-07-24 17:26:44.002901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.935 ms, result 0
00:25:57.913 true
00:25:57.913 17:26:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:58.170 [2024-07-24 17:26:44.282601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:58.170 [2024-07-24 17:26:44.282733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:58.170 [2024-07-24 17:26:44.282771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms
00:25:58.170 [2024-07-24 17:26:44.282787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:58.170 [2024-07-24 17:26:44.282877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.600 ms, result 0
00:25:58.170 true
00:25:58.170 17:26:44 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79606
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79606 ']'
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79606
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79606
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:58.170 killing process with pid 79606
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79606'
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79606
00:25:58.170 17:26:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79606
00:25:59.103 [2024-07-24 17:26:45.240241]
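The two bdev_ftl_unmap RPCs traced above trim 1024 blocks at each end of the device's logical space: the layout dump earlier reported 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the second call targets the final 1024 blocks of the address range. Run standalone, the same pair of calls looks like this (arguments exactly as in the trace):

# Trim 1024 blocks at the start and at the end of ftl0's LBA space.
# 23591936 = 23592960 (L2P entries reported in the layout dump) - 1024.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
$rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call appears in the log as its own short-lived 'FTL trim' management process ("Process trim", then "Management process finished, name 'FTL trim' ... result 0"), after which the test tears the target down with killprocess.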
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.240327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:59.103 [2024-07-24 17:26:45.240366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.103 [2024-07-24 17:26:45.240380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.240414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:59.103 [2024-07-24 17:26:45.243807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.243844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:59.103 [2024-07-24 17:26:45.243874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.374 ms 00:25:59.103 [2024-07-24 17:26:45.243889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.244219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.244250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:59.103 [2024-07-24 17:26:45.244265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:59.103 [2024-07-24 17:26:45.244278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.248047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.248125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:59.103 [2024-07-24 17:26:45.248171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:25:59.103 [2024-07-24 17:26:45.248185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.254571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.254642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:59.103 [2024-07-24 17:26:45.254683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.341 ms 00:25:59.103 [2024-07-24 17:26:45.254698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.266103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.266183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:59.103 [2024-07-24 17:26:45.266199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.350 ms 00:25:59.103 [2024-07-24 17:26:45.266215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.274849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.274913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:59.103 [2024-07-24 17:26:45.274969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.592 ms 00:25:59.103 [2024-07-24 17:26:45.274984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.275137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.275176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:59.103 [2024-07-24 17:26:45.275206] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:59.103 [2024-07-24 17:26:45.275264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.286862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.286932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:59.103 [2024-07-24 17:26:45.286967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.572 ms 00:25:59.103 [2024-07-24 17:26:45.286984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.298301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.298361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:59.103 [2024-07-24 17:26:45.298377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.273 ms 00:25:59.103 [2024-07-24 17:26:45.298400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.310190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.310266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:59.103 [2024-07-24 17:26:45.310282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.747 ms 00:25:59.103 [2024-07-24 17:26:45.310298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.322251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.103 [2024-07-24 17:26:45.322324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:59.103 [2024-07-24 17:26:45.322356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.879 ms 00:25:59.103 [2024-07-24 17:26:45.322371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.103 [2024-07-24 17:26:45.322412] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:59.103 [2024-07-24 17:26:45.322443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 
17:26:45.322670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:59.103 [2024-07-24 17:26:45.322828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.322990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:59.104 [2024-07-24 17:26:45.323089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.323993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:59.104 [2024-07-24 17:26:45.324166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:59.104 [2024-07-24 17:26:45.324179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:25:59.104 [2024-07-24 17:26:45.324202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:59.104 [2024-07-24 17:26:45.324214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:59.104 [2024-07-24 17:26:45.324230] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:59.105 [2024-07-24 17:26:45.324242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:59.105 [2024-07-24 17:26:45.324258] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:59.105 [2024-07-24 17:26:45.324270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:59.105 [2024-07-24 17:26:45.324285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:59.105 [2024-07-24 17:26:45.324296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:59.105 [2024-07-24 17:26:45.324327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:59.105 [2024-07-24 17:26:45.324339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:59.105 [2024-07-24 17:26:45.324356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:59.105 [2024-07-24 17:26:45.324370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:25:59.105 [2024-07-24 17:26:45.324393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.341468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.423 [2024-07-24 17:26:45.341531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:59.423 [2024-07-24 17:26:45.341548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.034 ms 00:25:59.423 [2024-07-24 17:26:45.341570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.342160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.423 [2024-07-24 17:26:45.342203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:59.423 [2024-07-24 17:26:45.342225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:25:59.423 [2024-07-24 17:26:45.342243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.393637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.393769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.423 [2024-07-24 17:26:45.393790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.393807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.393947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.393972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.423 [2024-07-24 17:26:45.393991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.394023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.394136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.394163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.423 [2024-07-24 17:26:45.394177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.394198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.394225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.394246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.423 [2024-07-24 17:26:45.394259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.394281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.480039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.480144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.423 [2024-07-24 17:26:45.480164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.480181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 
17:26:45.549046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.549135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.423 [2024-07-24 17:26:45.549158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.549175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.549257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.549283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.423 [2024-07-24 17:26:45.549296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.549316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.549352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.549371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.423 [2024-07-24 17:26:45.549399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.423 [2024-07-24 17:26:45.549430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.423 [2024-07-24 17:26:45.549568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.423 [2024-07-24 17:26:45.549597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.423 [2024-07-24 17:26:45.549611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.424 [2024-07-24 17:26:45.549628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.424 [2024-07-24 17:26:45.549696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.424 [2024-07-24 17:26:45.549728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:59.424 [2024-07-24 17:26:45.549743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.424 [2024-07-24 17:26:45.549777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.424 [2024-07-24 17:26:45.549835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.424 [2024-07-24 17:26:45.549857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.424 [2024-07-24 17:26:45.549870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.424 [2024-07-24 17:26:45.549891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.424 [2024-07-24 17:26:45.549962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.424 [2024-07-24 17:26:45.549993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.424 [2024-07-24 17:26:45.550006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.424 [2024-07-24 17:26:45.550023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.424 [2024-07-24 17:26:45.550220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.953 ms, result 0 00:26:00.370 17:26:46 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:00.370 17:26:46 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:00.370 [2024-07-24 17:26:46.529364] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:00.370 [2024-07-24 17:26:46.529538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79670 ] 00:26:00.627 [2024-07-24 17:26:46.687493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.885 [2024-07-24 17:26:46.898507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.142 [2024-07-24 17:26:47.212183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:01.142 [2024-07-24 17:26:47.212291] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:01.142 [2024-07-24 17:26:47.372917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.142 [2024-07-24 17:26:47.372994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:01.142 [2024-07-24 17:26:47.373028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:01.143 [2024-07-24 17:26:47.373039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.143 [2024-07-24 17:26:47.376156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.143 [2024-07-24 17:26:47.376218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:01.143 [2024-07-24 17:26:47.376249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:26:01.143 [2024-07-24 17:26:47.376260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.143 [2024-07-24 17:26:47.376391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:01.143 [2024-07-24 17:26:47.377340] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:01.143 [2024-07-24 17:26:47.377409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.143 [2024-07-24 17:26:47.377437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:01.143 [2024-07-24 17:26:47.377449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:26:01.143 [2024-07-24 17:26:47.377459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.143 [2024-07-24 17:26:47.379648] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:01.402 [2024-07-24 17:26:47.394424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.394482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:01.402 [2024-07-24 17:26:47.394518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.777 ms 00:26:01.402 [2024-07-24 17:26:47.394529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.394638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.394671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:01.402 [2024-07-24 17:26:47.394684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:26:01.402 [2024-07-24 17:26:47.394694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.403378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.403436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:01.402 [2024-07-24 17:26:47.403466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:26:01.402 [2024-07-24 17:26:47.403476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.403598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.403618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:01.402 [2024-07-24 17:26:47.403630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:01.402 [2024-07-24 17:26:47.403640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.403750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.403769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:01.402 [2024-07-24 17:26:47.403801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:01.402 [2024-07-24 17:26:47.403813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.403851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:01.402 [2024-07-24 17:26:47.408352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.408404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:01.402 [2024-07-24 17:26:47.408433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:26:01.402 [2024-07-24 17:26:47.408443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.408525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.408544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:01.402 [2024-07-24 17:26:47.408555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:01.402 [2024-07-24 17:26:47.408565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.408595] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:01.402 [2024-07-24 17:26:47.408624] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:01.402 [2024-07-24 17:26:47.408717] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:01.402 [2024-07-24 17:26:47.408743] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:01.402 [2024-07-24 17:26:47.408844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:01.402 [2024-07-24 17:26:47.408860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:01.402 [2024-07-24 17:26:47.408874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:01.402 [2024-07-24 17:26:47.408889] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:01.402 [2024-07-24 17:26:47.408902] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:01.402 [2024-07-24 17:26:47.408919] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:01.402 [2024-07-24 17:26:47.408931] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:01.402 [2024-07-24 17:26:47.408941] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:01.402 [2024-07-24 17:26:47.408952] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:01.402 [2024-07-24 17:26:47.408964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.408974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:01.402 [2024-07-24 17:26:47.408986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:26:01.402 [2024-07-24 17:26:47.408997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.409089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.402 [2024-07-24 17:26:47.409105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:01.402 [2024-07-24 17:26:47.409123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:01.402 [2024-07-24 17:26:47.409133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.402 [2024-07-24 17:26:47.409239] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:01.402 [2024-07-24 17:26:47.409255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:01.402 [2024-07-24 17:26:47.409267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.402 [2024-07-24 17:26:47.409279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.402 [2024-07-24 17:26:47.409290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:01.402 [2024-07-24 17:26:47.409300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:01.402 [2024-07-24 17:26:47.409310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:01.402 [2024-07-24 17:26:47.409319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:01.402 [2024-07-24 17:26:47.409329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:01.402 [2024-07-24 17:26:47.409339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.402 [2024-07-24 17:26:47.409349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:01.402 [2024-07-24 17:26:47.409359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:01.402 [2024-07-24 17:26:47.409368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.402 [2024-07-24 17:26:47.409378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:01.402 [2024-07-24 17:26:47.409388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:01.402 [2024-07-24 17:26:47.409397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.402 [2024-07-24 17:26:47.409407] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:01.403 [2024-07-24 17:26:47.409418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:01.403 [2024-07-24 17:26:47.409463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:01.403 [2024-07-24 17:26:47.409493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:01.403 [2024-07-24 17:26:47.409522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:01.403 [2024-07-24 17:26:47.409550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:01.403 [2024-07-24 17:26:47.409579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.403 [2024-07-24 17:26:47.409599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:01.403 [2024-07-24 17:26:47.409609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:01.403 [2024-07-24 17:26:47.409618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.403 [2024-07-24 17:26:47.409628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:01.403 [2024-07-24 17:26:47.409638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:01.403 [2024-07-24 17:26:47.409662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:01.403 [2024-07-24 17:26:47.409684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:01.403 [2024-07-24 17:26:47.409694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409704] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:01.403 [2024-07-24 17:26:47.409715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:01.403 [2024-07-24 17:26:47.409726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.403 [2024-07-24 17:26:47.409752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:01.403 
[2024-07-24 17:26:47.409763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:01.403 [2024-07-24 17:26:47.409774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:01.403 [2024-07-24 17:26:47.409785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:01.403 [2024-07-24 17:26:47.409794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:01.403 [2024-07-24 17:26:47.409804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:01.403 [2024-07-24 17:26:47.409816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:01.403 [2024-07-24 17:26:47.409830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.409842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:01.403 [2024-07-24 17:26:47.409854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:01.403 [2024-07-24 17:26:47.409865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:01.403 [2024-07-24 17:26:47.409876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:01.403 [2024-07-24 17:26:47.409887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:01.403 [2024-07-24 17:26:47.409899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:01.403 [2024-07-24 17:26:47.409910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:01.403 [2024-07-24 17:26:47.409921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:01.403 [2024-07-24 17:26:47.409932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:01.403 [2024-07-24 17:26:47.409943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.409953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.409964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.409975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.409986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:01.403 [2024-07-24 17:26:47.409997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:01.403 [2024-07-24 17:26:47.410010] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.410021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.403 [2024-07-24 17:26:47.410033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:01.403 [2024-07-24 17:26:47.410045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:01.403 [2024-07-24 17:26:47.410056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:01.403 [2024-07-24 17:26:47.410067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.410078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:01.403 [2024-07-24 17:26:47.410090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:26:01.403 [2024-07-24 17:26:47.410100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.463198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.463288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:01.403 [2024-07-24 17:26:47.463314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.024 ms 00:26:01.403 [2024-07-24 17:26:47.463325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.463586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.463613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:01.403 [2024-07-24 17:26:47.463634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:26:01.403 [2024-07-24 17:26:47.463659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.502273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.502346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:01.403 [2024-07-24 17:26:47.502380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.576 ms 00:26:01.403 [2024-07-24 17:26:47.502391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.502560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.502595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:01.403 [2024-07-24 17:26:47.502639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:01.403 [2024-07-24 17:26:47.502651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.503266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.503301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:01.403 [2024-07-24 17:26:47.503316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:26:01.403 [2024-07-24 17:26:47.503326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 
17:26:47.503510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.503530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:01.403 [2024-07-24 17:26:47.503543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:26:01.403 [2024-07-24 17:26:47.503554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.521086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.521127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:01.403 [2024-07-24 17:26:47.521158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.502 ms 00:26:01.403 [2024-07-24 17:26:47.521169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.536524] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:01.403 [2024-07-24 17:26:47.536565] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:01.403 [2024-07-24 17:26:47.536597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.536608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:01.403 [2024-07-24 17:26:47.536621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.273 ms 00:26:01.403 [2024-07-24 17:26:47.536631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.403 [2024-07-24 17:26:47.563351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.403 [2024-07-24 17:26:47.563394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:01.404 [2024-07-24 17:26:47.563426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.600 ms 00:26:01.404 [2024-07-24 17:26:47.563437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.404 [2024-07-24 17:26:47.577562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.404 [2024-07-24 17:26:47.577604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:01.404 [2024-07-24 17:26:47.577634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.034 ms 00:26:01.404 [2024-07-24 17:26:47.577644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.404 [2024-07-24 17:26:47.591460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.404 [2024-07-24 17:26:47.591500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:01.404 [2024-07-24 17:26:47.591529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.700 ms 00:26:01.404 [2024-07-24 17:26:47.591539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.404 [2024-07-24 17:26:47.592529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.404 [2024-07-24 17:26:47.592562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:01.404 [2024-07-24 17:26:47.592593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:26:01.404 [2024-07-24 17:26:47.592633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.661 [2024-07-24 17:26:47.661621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:01.661 [2024-07-24 17:26:47.661745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:01.661 [2024-07-24 17:26:47.661783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.957 ms 00:26:01.661 [2024-07-24 17:26:47.661795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.661 [2024-07-24 17:26:47.673231] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:01.661 [2024-07-24 17:26:47.694567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.694638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:01.662 [2024-07-24 17:26:47.694721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.610 ms 00:26:01.662 [2024-07-24 17:26:47.694733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.694915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.694964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:01.662 [2024-07-24 17:26:47.694982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:01.662 [2024-07-24 17:26:47.694994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.695070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.695087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:01.662 [2024-07-24 17:26:47.695101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:01.662 [2024-07-24 17:26:47.695112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.695148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.695168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:01.662 [2024-07-24 17:26:47.695180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:01.662 [2024-07-24 17:26:47.695192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.695232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:01.662 [2024-07-24 17:26:47.695249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.695276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:01.662 [2024-07-24 17:26:47.695287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:01.662 [2024-07-24 17:26:47.695297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.723682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.723737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:01.662 [2024-07-24 17:26:47.723769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.354 ms 00:26:01.662 [2024-07-24 17:26:47.723779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.723902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.662 [2024-07-24 17:26:47.723920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:26:01.662 [2024-07-24 17:26:47.723932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:01.662 [2024-07-24 17:26:47.723942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.662 [2024-07-24 17:26:47.725251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:01.662 [2024-07-24 17:26:47.729032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.951 ms, result 0 00:26:01.662 [2024-07-24 17:26:47.730035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:01.662 [2024-07-24 17:26:47.744781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:13.373  Copying: 25/256 [MB] (25 MBps) Copying: 47/256 [MB] (22 MBps) Copying: 69/256 [MB] (22 MBps) Copying: 91/256 [MB] (21 MBps) Copying: 113/256 [MB] (21 MBps) Copying: 134/256 [MB] (21 MBps) Copying: 156/256 [MB] (21 MBps) Copying: 177/256 [MB] (21 MBps) Copying: 198/256 [MB] (21 MBps) Copying: 220/256 [MB] (21 MBps) Copying: 241/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-24 17:26:59.402958] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.373 [2024-07-24 17:26:59.413829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.413866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.373 [2024-07-24 17:26:59.413900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:13.373 [2024-07-24 17:26:59.413910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.413943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:13.373 [2024-07-24 17:26:59.417089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.417118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.373 [2024-07-24 17:26:59.417148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:26:13.373 [2024-07-24 17:26:59.417157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.417404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.417421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.373 [2024-07-24 17:26:59.417433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:26:13.373 [2024-07-24 17:26:59.417447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.420552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.420580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.373 [2024-07-24 17:26:59.420615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:26:13.373 [2024-07-24 17:26:59.420625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.426693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.426720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finish L2P trims 00:26:13.373 [2024-07-24 17:26:59.426749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.047 ms 00:26:13.373 [2024-07-24 17:26:59.426758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.451720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.451779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.373 [2024-07-24 17:26:59.451793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.902 ms 00:26:13.373 [2024-07-24 17:26:59.451803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.467013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.467052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:13.373 [2024-07-24 17:26:59.467084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.185 ms 00:26:13.373 [2024-07-24 17:26:59.467101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.467264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.467282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:13.373 [2024-07-24 17:26:59.467294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:13.373 [2024-07-24 17:26:59.467303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.373 [2024-07-24 17:26:59.492347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.373 [2024-07-24 17:26:59.492383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:13.373 [2024-07-24 17:26:59.492413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.024 ms 00:26:13.373 [2024-07-24 17:26:59.492422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.374 [2024-07-24 17:26:59.516716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.374 [2024-07-24 17:26:59.516752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:13.374 [2024-07-24 17:26:59.516782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.268 ms 00:26:13.374 [2024-07-24 17:26:59.516791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.374 [2024-07-24 17:26:59.543323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.374 [2024-07-24 17:26:59.543360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:13.374 [2024-07-24 17:26:59.543390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.507 ms 00:26:13.374 [2024-07-24 17:26:59.543399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.374 [2024-07-24 17:26:59.570274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.374 [2024-07-24 17:26:59.570312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:13.374 [2024-07-24 17:26:59.570343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.808 ms 00:26:13.374 [2024-07-24 17:26:59.570368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.374 [2024-07-24 17:26:59.570407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:13.374 
[2024-07-24 17:26:59.570432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:26:13.374 [2024-07-24 17:26:59.570713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.570990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:13.374 [2024-07-24 17:26:59.571368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571586] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:13.375 [2024-07-24 17:26:59.571605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:13.375 [2024-07-24 17:26:59.571616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:26:13.375 [2024-07-24 17:26:59.571627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:13.375 [2024-07-24 17:26:59.571636] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:13.375 [2024-07-24 17:26:59.571673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:13.375 [2024-07-24 17:26:59.571684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:13.375 [2024-07-24 17:26:59.571705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:13.375 [2024-07-24 17:26:59.571718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:13.375 [2024-07-24 17:26:59.571728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:13.375 [2024-07-24 17:26:59.571738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:13.375 [2024-07-24 17:26:59.571747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:13.375 [2024-07-24 17:26:59.571757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.375 [2024-07-24 17:26:59.571768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:13.375 [2024-07-24 17:26:59.571784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:26:13.375 [2024-07-24 17:26:59.571794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.375 [2024-07-24 17:26:59.587087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.375 [2024-07-24 17:26:59.587125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:13.375 [2024-07-24 17:26:59.587141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.268 ms 00:26:13.375 [2024-07-24 17:26:59.587152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.375 [2024-07-24 17:26:59.587597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.375 [2024-07-24 17:26:59.587623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:13.375 [2024-07-24 17:26:59.587636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:26:13.375 [2024-07-24 17:26:59.587684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.624979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.625033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.634 [2024-07-24 17:26:59.625065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.625076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.625200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.625222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.634 [2024-07-24 17:26:59.625235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.625245] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.625303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.625322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.634 [2024-07-24 17:26:59.625334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.625345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.625370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.625384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.634 [2024-07-24 17:26:59.625417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.625442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.716774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.716833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.634 [2024-07-24 17:26:59.716866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.716877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.634 [2024-07-24 17:26:59.799134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.634 [2024-07-24 17:26:59.799308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.634 [2024-07-24 17:26:59.799375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.634 [2024-07-24 17:26:59.799530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:13.634 [2024-07-24 17:26:59.799614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.634 [2024-07-24 17:26:59.799763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.799845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.634 [2024-07-24 17:26:59.799862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.634 [2024-07-24 17:26:59.799874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.634 [2024-07-24 17:26:59.799891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.634 [2024-07-24 17:26:59.800073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.212 ms, result 0 00:26:15.009 00:26:15.009 00:26:15.009 17:27:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:26:15.009 17:27:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:15.267 17:27:01 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:15.526 [2024-07-24 17:27:01.560637] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
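[Reader annotation, not part of the captured log. The three ftl.ftl_trim commands just above are the trim test's data-verification step: byte-compare the start of the read-back file against zeros, checksum it, then push a known random pattern back into the ftl0 bdev through spdk_dd, whose console output continues below. A minimal sketch of that step as a standalone script, using the exact paths and flags printed in the log; the interpretation in the comments is my assumption, not something the log states.]

    #!/usr/bin/env bash
    # Sketch of the verification step at trim.sh lines @86-@90 above.
    set -euo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk
    DATA=$SPDK/test/ftl/data

    # A trimmed range is expected to read back as zeros, so byte-compare the
    # first 4 MiB of the dumped data against /dev/zero (assumption: that is
    # what the cmp in the log is checking).
    cmp --bytes=4194304 "$DATA" /dev/zero

    # Record a checksum of the read-back file for a later comparison.
    md5sum "$DATA"

    # Re-write a known random pattern into the FTL bdev; --json points at the
    # config that recreates the bdev stack whose startup is logged below.
    "$SPDK/build/bin/spdk_dd" \
        --if="$SPDK/test/ftl/random_pattern" \
        --ob=ftl0 --count=1024 \
        --json="$SPDK/test/ftl/config/ftl.json"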
00:26:15.526 [2024-07-24 17:27:01.560819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79825 ] 00:26:15.526 [2024-07-24 17:27:01.721465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.784 [2024-07-24 17:27:01.951491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.073 [2024-07-24 17:27:02.297986] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.073 [2024-07-24 17:27:02.298071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.333 [2024-07-24 17:27:02.462815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.462869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:16.333 [2024-07-24 17:27:02.462904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:16.333 [2024-07-24 17:27:02.462914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.466319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.466370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.333 [2024-07-24 17:27:02.466401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:26:16.333 [2024-07-24 17:27:02.466411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.466536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:16.333 [2024-07-24 17:27:02.467576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:16.333 [2024-07-24 17:27:02.467613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.467657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.333 [2024-07-24 17:27:02.467669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:26:16.333 [2024-07-24 17:27:02.467712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.470049] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:16.333 [2024-07-24 17:27:02.485873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.485912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:16.333 [2024-07-24 17:27:02.485950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.834 ms 00:26:16.333 [2024-07-24 17:27:02.485960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.486071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.486091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:16.333 [2024-07-24 17:27:02.486103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:16.333 [2024-07-24 17:27:02.486113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.495164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:16.333 [2024-07-24 17:27:02.495206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.333 [2024-07-24 17:27:02.495223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.999 ms 00:26:16.333 [2024-07-24 17:27:02.495234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.495380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.495399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.333 [2024-07-24 17:27:02.495411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:16.333 [2024-07-24 17:27:02.495421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.495475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.495488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:16.333 [2024-07-24 17:27:02.495502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:16.333 [2024-07-24 17:27:02.495512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.495540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:16.333 [2024-07-24 17:27:02.500483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.500518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.333 [2024-07-24 17:27:02.500548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:26:16.333 [2024-07-24 17:27:02.500559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.500641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.500694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:16.333 [2024-07-24 17:27:02.500710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:16.333 [2024-07-24 17:27:02.500719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.500752] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:16.333 [2024-07-24 17:27:02.500781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:16.333 [2024-07-24 17:27:02.500822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:16.333 [2024-07-24 17:27:02.500842] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:16.333 [2024-07-24 17:27:02.500968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:16.333 [2024-07-24 17:27:02.500984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:16.333 [2024-07-24 17:27:02.500998] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:16.333 [2024-07-24 17:27:02.501011] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:16.333 [2024-07-24 17:27:02.501024] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:16.333 [2024-07-24 17:27:02.501056] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:16.333 [2024-07-24 17:27:02.501066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:16.333 [2024-07-24 17:27:02.501077] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:16.333 [2024-07-24 17:27:02.501087] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:16.333 [2024-07-24 17:27:02.501098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.501109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:16.333 [2024-07-24 17:27:02.501120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:26:16.333 [2024-07-24 17:27:02.501130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.501216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.333 [2024-07-24 17:27:02.501230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:16.333 [2024-07-24 17:27:02.501246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:16.333 [2024-07-24 17:27:02.501270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.333 [2024-07-24 17:27:02.501379] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:16.333 [2024-07-24 17:27:02.501393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:16.334 [2024-07-24 17:27:02.501404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:16.334 [2024-07-24 17:27:02.501433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:16.334 [2024-07-24 17:27:02.501462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.334 [2024-07-24 17:27:02.501480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:16.334 [2024-07-24 17:27:02.501489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:16.334 [2024-07-24 17:27:02.501498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.334 [2024-07-24 17:27:02.501507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:16.334 [2024-07-24 17:27:02.501517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:16.334 [2024-07-24 17:27:02.501526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:16.334 [2024-07-24 17:27:02.501546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501567] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:16.334 [2024-07-24 17:27:02.501586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:16.334 [2024-07-24 17:27:02.501614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:16.334 [2024-07-24 17:27:02.501642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:16.334 [2024-07-24 17:27:02.501680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.334 [2024-07-24 17:27:02.501700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:16.334 [2024-07-24 17:27:02.501709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:16.334 [2024-07-24 17:27:02.501718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.334 [2024-07-24 17:27:02.501727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:16.334 [2024-07-24 17:27:02.502031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:16.334 [2024-07-24 17:27:02.502073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.334 [2024-07-24 17:27:02.502106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:16.334 [2024-07-24 17:27:02.502261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:16.334 [2024-07-24 17:27:02.502318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.502371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:16.334 [2024-07-24 17:27:02.502422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:16.334 [2024-07-24 17:27:02.502467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.502500] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:16.334 [2024-07-24 17:27:02.502533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:16.334 [2024-07-24 17:27:02.502566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.334 [2024-07-24 17:27:02.502598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.334 [2024-07-24 17:27:02.502697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:16.334 [2024-07-24 17:27:02.502738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:16.334 [2024-07-24 17:27:02.502771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:16.334 
[2024-07-24 17:27:02.502804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:16.334 [2024-07-24 17:27:02.502837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:16.334 [2024-07-24 17:27:02.502870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:16.334 [2024-07-24 17:27:02.502999] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:16.334 [2024-07-24 17:27:02.503070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:16.334 [2024-07-24 17:27:02.503297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:16.334 [2024-07-24 17:27:02.503396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:16.334 [2024-07-24 17:27:02.503508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:16.334 [2024-07-24 17:27:02.503529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:16.334 [2024-07-24 17:27:02.503541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:16.334 [2024-07-24 17:27:02.503552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:16.334 [2024-07-24 17:27:02.503564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:16.334 [2024-07-24 17:27:02.503575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:16.334 [2024-07-24 17:27:02.503586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:16.334 [2024-07-24 17:27:02.503639] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:16.334 [2024-07-24 17:27:02.503685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:16.334 [2024-07-24 17:27:02.503711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:16.334 [2024-07-24 17:27:02.503722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:16.334 [2024-07-24 17:27:02.503734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:16.334 [2024-07-24 17:27:02.503747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.334 [2024-07-24 17:27:02.503759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:16.334 [2024-07-24 17:27:02.503772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:26:16.334 [2024-07-24 17:27:02.503783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.334 [2024-07-24 17:27:02.549304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.334 [2024-07-24 17:27:02.549639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.334 [2024-07-24 17:27:02.549776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.437 ms 00:26:16.334 [2024-07-24 17:27:02.549839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.334 [2024-07-24 17:27:02.550130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.334 [2024-07-24 17:27:02.550324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:16.334 [2024-07-24 17:27:02.550463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:16.334 [2024-07-24 17:27:02.550512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.592 [2024-07-24 17:27:02.588393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.592 [2024-07-24 17:27:02.588630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.592 [2024-07-24 17:27:02.588812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.750 ms 00:26:16.592 [2024-07-24 17:27:02.588836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.592 [2024-07-24 17:27:02.588970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.592 [2024-07-24 17:27:02.588990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.592 [2024-07-24 17:27:02.589003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.593 [2024-07-24 17:27:02.589015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.589645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.589661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.593 [2024-07-24 17:27:02.589684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:26:16.593 [2024-07-24 17:27:02.589698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.589851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.589867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.593 [2024-07-24 17:27:02.589879] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:26:16.593 [2024-07-24 17:27:02.589889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.607940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.607979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.593 [2024-07-24 17:27:02.608026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.025 ms 00:26:16.593 [2024-07-24 17:27:02.608037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.624168] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:16.593 [2024-07-24 17:27:02.624210] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:16.593 [2024-07-24 17:27:02.624244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.624255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:16.593 [2024-07-24 17:27:02.624283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.064 ms 00:26:16.593 [2024-07-24 17:27:02.624294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.651203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.651250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:16.593 [2024-07-24 17:27:02.651284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.818 ms 00:26:16.593 [2024-07-24 17:27:02.651296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.664444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.664482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:16.593 [2024-07-24 17:27:02.664512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.064 ms 00:26:16.593 [2024-07-24 17:27:02.664521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.678477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.678516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:16.593 [2024-07-24 17:27:02.678547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.875 ms 00:26:16.593 [2024-07-24 17:27:02.678557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.679481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.679513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:16.593 [2024-07-24 17:27:02.679543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:26:16.593 [2024-07-24 17:27:02.679553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.751869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.751942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:16.593 [2024-07-24 17:27:02.751977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.284 ms 00:26:16.593 [2024-07-24 17:27:02.751988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.762452] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:16.593 [2024-07-24 17:27:02.782263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.782328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:16.593 [2024-07-24 17:27:02.782362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.125 ms 00:26:16.593 [2024-07-24 17:27:02.782373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.782510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.782528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:16.593 [2024-07-24 17:27:02.782540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:16.593 [2024-07-24 17:27:02.782550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.782620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.782635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:16.593 [2024-07-24 17:27:02.782647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:16.593 [2024-07-24 17:27:02.782656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.782751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.782771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:16.593 [2024-07-24 17:27:02.782783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:16.593 [2024-07-24 17:27:02.782794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.782849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:16.593 [2024-07-24 17:27:02.782870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.782882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:16.593 [2024-07-24 17:27:02.782898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:16.593 [2024-07-24 17:27:02.782913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.810804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.810848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:16.593 [2024-07-24 17:27:02.810880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.805 ms 00:26:16.593 [2024-07-24 17:27:02.810891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.593 [2024-07-24 17:27:02.811041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.593 [2024-07-24 17:27:02.811062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:16.593 [2024-07-24 17:27:02.811076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:16.593 [2024-07-24 17:27:02.811087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
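[Reader annotation, not part of the captured log. A quick cross-check of the ftl_layout_setup numbers dumped above, just before the 'FTL startup' process finishes below. The first calculation follows directly from the logged values; the 4 KiB FTL logical block size in the last line is an assumption, not stated in this output.]

    # L2P table: 23592960 entries x 4 B/entry = 94371840 B = 90 MiB,
    # matching the "Region l2p ... blocks: 90.00 MiB" line above.
    echo $(( 23592960 * 4 ))                  # 94371840
    echo $(( 23592960 * 4 / 1048576 ))        # 90
    # Assuming 4 KiB logical blocks, the mapped user space is
    # 23592960 LBAs x 4096 B = 90 GiB:
    echo $(( 23592960 * 4096 / 1073741824 ))  # 90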
00:26:16.593 [2024-07-24 17:27:02.812443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:16.593 [2024-07-24 17:27:02.816168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.209 ms, result 0 00:26:16.593 [2024-07-24 17:27:02.817154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.852 [2024-07-24 17:27:02.831375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:16.852  Copying: 4096/4096 [kB] (average 21 MBps)[2024-07-24 17:27:03.016967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.852 [2024-07-24 17:27:03.027616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.027682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.852 [2024-07-24 17:27:03.027716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:16.852 [2024-07-24 17:27:03.027726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.027760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:16.852 [2024-07-24 17:27:03.031125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.031161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.852 [2024-07-24 17:27:03.031176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:26:16.852 [2024-07-24 17:27:03.031187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.033073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.033137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.852 [2024-07-24 17:27:03.033168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:26:16.852 [2024-07-24 17:27:03.033178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.036938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.036975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.852 [2024-07-24 17:27:03.037013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:26:16.852 [2024-07-24 17:27:03.037024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.043649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.043706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.852 [2024-07-24 17:27:03.043737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:26:16.852 [2024-07-24 17:27:03.043747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.069871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.069907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.852 [2024-07-24 17:27:03.069938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.034 ms 00:26:16.852 [2024-07-24 17:27:03.069947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.085711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.085749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.852 [2024-07-24 17:27:03.085779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.723 ms 00:26:16.852 [2024-07-24 17:27:03.085798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.852 [2024-07-24 17:27:03.085936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.852 [2024-07-24 17:27:03.085954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.852 [2024-07-24 17:27:03.085966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:16.852 [2024-07-24 17:27:03.085975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.111 [2024-07-24 17:27:03.112862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.111 [2024-07-24 17:27:03.112900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:17.111 [2024-07-24 17:27:03.112930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.868 ms 00:26:17.111 [2024-07-24 17:27:03.112940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.111 [2024-07-24 17:27:03.139432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.111 [2024-07-24 17:27:03.139480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:17.111 [2024-07-24 17:27:03.139512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.451 ms 00:26:17.111 [2024-07-24 17:27:03.139521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.111 [2024-07-24 17:27:03.165773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.111 [2024-07-24 17:27:03.165811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:17.111 [2024-07-24 17:27:03.165842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.211 ms 00:26:17.111 [2024-07-24 17:27:03.165851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.111 [2024-07-24 17:27:03.192419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.111 [2024-07-24 17:27:03.192457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:17.111 [2024-07-24 17:27:03.192488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.503 ms 00:26:17.111 [2024-07-24 17:27:03.192497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.111 [2024-07-24 17:27:03.192539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:17.111 [2024-07-24 17:27:03.192559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 
17:27:03.192602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:26:17.112 [2024-07-24 17:27:03.192916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.192992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:17.112 [2024-07-24 17:27:03.193620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:17.113 [2024-07-24 17:27:03.193799] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:17.113 [2024-07-24 17:27:03.193811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:26:17.113 [2024-07-24 17:27:03.193822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:17.113 [2024-07-24 17:27:03.193832] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:17.113 
[2024-07-24 17:27:03.193856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:17.113 [2024-07-24 17:27:03.193868] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:17.113 [2024-07-24 17:27:03.193878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:17.113 [2024-07-24 17:27:03.193889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:17.113 [2024-07-24 17:27:03.193899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:17.113 [2024-07-24 17:27:03.193909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:17.113 [2024-07-24 17:27:03.193919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:17.113 [2024-07-24 17:27:03.193929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.113 [2024-07-24 17:27:03.193940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:17.113 [2024-07-24 17:27:03.193956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:26:17.113 [2024-07-24 17:27:03.193966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.209349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.113 [2024-07-24 17:27:03.209385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:17.113 [2024-07-24 17:27:03.209416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.358 ms 00:26:17.113 [2024-07-24 17:27:03.209426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.209968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.113 [2024-07-24 17:27:03.209996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:17.113 [2024-07-24 17:27:03.210025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:26:17.113 [2024-07-24 17:27:03.210037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.245920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.113 [2024-07-24 17:27:03.245960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.113 [2024-07-24 17:27:03.245992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.113 [2024-07-24 17:27:03.246002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.246099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.113 [2024-07-24 17:27:03.246114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.113 [2024-07-24 17:27:03.246125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.113 [2024-07-24 17:27:03.246135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.246185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.113 [2024-07-24 17:27:03.246200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.113 [2024-07-24 17:27:03.246211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.113 [2024-07-24 17:27:03.246221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.246242] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:26:17.113 [2024-07-24 17:27:03.246259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.113 [2024-07-24 17:27:03.246270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.113 [2024-07-24 17:27:03.246279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.113 [2024-07-24 17:27:03.341955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.113 [2024-07-24 17:27:03.342031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.113 [2024-07-24 17:27:03.342067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.113 [2024-07-24 17:27:03.342080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.428148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.428557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.371 [2024-07-24 17:27:03.428674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.428764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.428915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.429106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.371 [2024-07-24 17:27:03.429215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.429289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.429375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.429576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.371 [2024-07-24 17:27:03.429747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.429865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.430066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.430174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.371 [2024-07-24 17:27:03.430384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.430491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.430646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.430739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.371 [2024-07-24 17:27:03.430863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.430961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.431262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.431361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.371 [2024-07-24 17:27:03.431425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.431493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:17.371 [2024-07-24 17:27:03.431612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.371 [2024-07-24 17:27:03.431795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.371 [2024-07-24 17:27:03.431914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.371 [2024-07-24 17:27:03.431985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.371 [2024-07-24 17:27:03.432216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.579 ms, result 0 00:26:18.304 00:26:18.304 00:26:18.304 17:27:04 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79859 00:26:18.304 17:27:04 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79859 00:26:18.304 17:27:04 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79859 ']' 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.304 17:27:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:18.563 [2024-07-24 17:27:04.565278] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:18.563 [2024-07-24 17:27:04.565471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79859 ] 00:26:18.563 [2024-07-24 17:27:04.735797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.821 [2024-07-24 17:27:04.939618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.757 17:27:05 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.757 17:27:05 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:26:19.757 17:27:05 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:26:19.757 [2024-07-24 17:27:05.882866] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.757 [2024-07-24 17:27:05.882964] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:20.017 [2024-07-24 17:27:06.062983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.063041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:20.017 [2024-07-24 17:27:06.063063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:20.017 [2024-07-24 17:27:06.063077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.066331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.066539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:20.017 [2024-07-24 17:27:06.066671] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:26:20.017 [2024-07-24 17:27:06.066731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.067106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:20.017 [2024-07-24 17:27:06.068131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:20.017 [2024-07-24 17:27:06.068344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.068373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:20.017 [2024-07-24 17:27:06.068388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:26:20.017 [2024-07-24 17:27:06.068404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.070547] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:20.017 [2024-07-24 17:27:06.086836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.086906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:20.017 [2024-07-24 17:27:06.086972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.284 ms 00:26:20.017 [2024-07-24 17:27:06.086985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.087103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.087125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:20.017 [2024-07-24 17:27:06.087141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:20.017 [2024-07-24 17:27:06.087153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.096465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.096507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:20.017 [2024-07-24 17:27:06.096531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.243 ms 00:26:20.017 [2024-07-24 17:27:06.096543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.096730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.096767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:20.017 [2024-07-24 17:27:06.096784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:26:20.017 [2024-07-24 17:27:06.096800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.096844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.096860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:20.017 [2024-07-24 17:27:06.096874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:20.017 [2024-07-24 17:27:06.096886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.096924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:20.017 [2024-07-24 17:27:06.101618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:20.017 [2024-07-24 17:27:06.101688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:20.017 [2024-07-24 17:27:06.101704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:26:20.017 [2024-07-24 17:27:06.101718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.101798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.101821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:20.017 [2024-07-24 17:27:06.101836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:20.017 [2024-07-24 17:27:06.101848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.101876] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:20.017 [2024-07-24 17:27:06.101904] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:20.017 [2024-07-24 17:27:06.101949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:20.017 [2024-07-24 17:27:06.101974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:20.017 [2024-07-24 17:27:06.102078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:20.017 [2024-07-24 17:27:06.102100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:20.017 [2024-07-24 17:27:06.102113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:20.017 [2024-07-24 17:27:06.102129] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102141] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102155] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:20.017 [2024-07-24 17:27:06.102165] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:20.017 [2024-07-24 17:27:06.102177] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:20.017 [2024-07-24 17:27:06.102187] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:20.017 [2024-07-24 17:27:06.102203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.102213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:20.017 [2024-07-24 17:27:06.102226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:26:20.017 [2024-07-24 17:27:06.102238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.102321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.017 [2024-07-24 17:27:06.102334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:20.017 [2024-07-24 17:27:06.102347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:20.017 [2024-07-24 17:27:06.102357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.017 [2024-07-24 17:27:06.102461] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:20.017 [2024-07-24 17:27:06.102478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:20.017 [2024-07-24 17:27:06.102492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:20.017 [2024-07-24 17:27:06.102529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:20.017 [2024-07-24 17:27:06.102566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.017 [2024-07-24 17:27:06.102587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:20.017 [2024-07-24 17:27:06.102597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:20.017 [2024-07-24 17:27:06.102609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.017 [2024-07-24 17:27:06.102619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:20.017 [2024-07-24 17:27:06.102631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:20.017 [2024-07-24 17:27:06.102641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:20.017 [2024-07-24 17:27:06.102692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:20.017 [2024-07-24 17:27:06.102730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:20.017 [2024-07-24 17:27:06.102761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:20.017 [2024-07-24 17:27:06.102775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.017 [2024-07-24 17:27:06.102784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:20.017 [2024-07-24 17:27:06.102796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:20.018 [2024-07-24 17:27:06.102815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.018 [2024-07-24 17:27:06.102829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:20.018 [2024-07-24 17:27:06.102839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:20.018 [2024-07-24 17:27:06.102851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.018 [2024-07-24 17:27:06.102861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:20.018 [2024-07-24 
17:27:06.102873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:20.018 [2024-07-24 17:27:06.102883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.018 [2024-07-24 17:27:06.102894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:20.018 [2024-07-24 17:27:06.102904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:20.018 [2024-07-24 17:27:06.102915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.018 [2024-07-24 17:27:06.102951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:20.018 [2024-07-24 17:27:06.102976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:20.018 [2024-07-24 17:27:06.102986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.018 [2024-07-24 17:27:06.103002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:20.018 [2024-07-24 17:27:06.103013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:20.018 [2024-07-24 17:27:06.103026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.018 [2024-07-24 17:27:06.103045] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:20.018 [2024-07-24 17:27:06.103060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:20.018 [2024-07-24 17:27:06.103071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.018 [2024-07-24 17:27:06.103084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.018 [2024-07-24 17:27:06.103095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:20.018 [2024-07-24 17:27:06.103108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:20.018 [2024-07-24 17:27:06.103119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:20.018 [2024-07-24 17:27:06.103132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:20.018 [2024-07-24 17:27:06.103142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:20.018 [2024-07-24 17:27:06.103154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:20.018 [2024-07-24 17:27:06.103166] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:20.018 [2024-07-24 17:27:06.103183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:20.018 [2024-07-24 17:27:06.103213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:20.018 [2024-07-24 17:27:06.103242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:20.018 [2024-07-24 17:27:06.103287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:20.018 [2024-07-24 17:27:06.103297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:20.018 
[2024-07-24 17:27:06.103308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:20.018 [2024-07-24 17:27:06.103319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:20.018 [2024-07-24 17:27:06.103330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:20.018 [2024-07-24 17:27:06.103340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:20.018 [2024-07-24 17:27:06.103352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:20.018 [2024-07-24 17:27:06.103405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:20.018 [2024-07-24 17:27:06.103418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:20.018 [2024-07-24 17:27:06.103445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:20.018 [2024-07-24 17:27:06.103454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:20.018 [2024-07-24 17:27:06.103466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:20.018 [2024-07-24 17:27:06.103478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.103491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:20.018 [2024-07-24 17:27:06.103502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:26:20.018 [2024-07-24 17:27:06.103516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.139333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.139411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:20.018 [2024-07-24 17:27:06.139435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.745 ms 00:26:20.018 [2024-07-24 17:27:06.139450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.139630] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.139652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:20.018 [2024-07-24 17:27:06.139681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:20.018 [2024-07-24 17:27:06.139730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.180373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.180428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:20.018 [2024-07-24 17:27:06.180446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.612 ms 00:26:20.018 [2024-07-24 17:27:06.180459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.180566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.180586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:20.018 [2024-07-24 17:27:06.180598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:20.018 [2024-07-24 17:27:06.180610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.181290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.181318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:20.018 [2024-07-24 17:27:06.181332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:26:20.018 [2024-07-24 17:27:06.181345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.181515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.181536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:20.018 [2024-07-24 17:27:06.181548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:26:20.018 [2024-07-24 17:27:06.181560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.200828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.200875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:20.018 [2024-07-24 17:27:06.200892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.241 ms 00:26:20.018 [2024-07-24 17:27:06.200905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.216380] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:20.018 [2024-07-24 17:27:06.216439] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:20.018 [2024-07-24 17:27:06.216460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 17:27:06.216473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:20.018 [2024-07-24 17:27:06.216485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.430 ms 00:26:20.018 [2024-07-24 17:27:06.216497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.018 [2024-07-24 17:27:06.242359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.018 [2024-07-24 
17:27:06.242403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:20.018 [2024-07-24 17:27:06.242420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.782 ms 00:26:20.018 [2024-07-24 17:27:06.242436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.277 [2024-07-24 17:27:06.255581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.277 [2024-07-24 17:27:06.255639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:20.277 [2024-07-24 17:27:06.255714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms 00:26:20.277 [2024-07-24 17:27:06.255732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.277 [2024-07-24 17:27:06.268544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.277 [2024-07-24 17:27:06.268600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:20.277 [2024-07-24 17:27:06.268616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.712 ms 00:26:20.277 [2024-07-24 17:27:06.268628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.277 [2024-07-24 17:27:06.269494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.277 [2024-07-24 17:27:06.269547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:20.277 [2024-07-24 17:27:06.269563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:26:20.277 [2024-07-24 17:27:06.269576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.277 [2024-07-24 17:27:06.344064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.277 [2024-07-24 17:27:06.344159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:20.277 [2024-07-24 17:27:06.344183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.457 ms 00:26:20.278 [2024-07-24 17:27:06.344197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.355503] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:20.278 [2024-07-24 17:27:06.375748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.376059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:20.278 [2024-07-24 17:27:06.376132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.410 ms 00:26:20.278 [2024-07-24 17:27:06.376148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.376291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.376312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:20.278 [2024-07-24 17:27:06.376329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:20.278 [2024-07-24 17:27:06.376341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.376434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.376451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:20.278 [2024-07-24 17:27:06.376483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:20.278 
[2024-07-24 17:27:06.376496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.376546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.376559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:20.278 [2024-07-24 17:27:06.376574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:20.278 [2024-07-24 17:27:06.376585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.376629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:20.278 [2024-07-24 17:27:06.376645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.376661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:20.278 [2024-07-24 17:27:06.376674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:20.278 [2024-07-24 17:27:06.376708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.404365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.404426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:20.278 [2024-07-24 17:27:06.404443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.625 ms 00:26:20.278 [2024-07-24 17:27:06.404457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.404570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.278 [2024-07-24 17:27:06.404597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:20.278 [2024-07-24 17:27:06.404612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:20.278 [2024-07-24 17:27:06.404625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.278 [2024-07-24 17:27:06.406170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:20.278 [2024-07-24 17:27:06.410252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.712 ms, result 0 00:26:20.278 [2024-07-24 17:27:06.411588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:20.278 Some configs were skipped because the RPC state that can call them passed over. 
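The two RPC calls that follow (trim.sh@99 and trim.sh@100) trim the first and the last 1024 blocks of the device's logical address space: the startup dump above reports "L2P entries: 23592960", and 23592960 - 1024 = 23591936, which is exactly the --lba passed at trim.sh@100. A minimal sketch of the same pair of calls, assuming only the rpc.py path already used throughout this run:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
num_blocks=23592960   # "L2P entries" from the FTL startup dump above
# Trim the first 1024 blocks, then the last 1024 blocks, of the ftl0 bdev.
"$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$rpc" bdev_ftl_unmap -b ftl0 --lba $(( num_blocks - 1024 )) --num_blocks 1024   # --lba 23591936

Each call runs as its own short 'FTL trim' management process inside the target, which is why the log below shows two separate "Management process finished, name 'FTL trim'" entries with result 0, and why the shell sees true as each RPC's printed result.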
00:26:20.278 17:27:06 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:26:20.536 [2024-07-24 17:27:06.700974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.536 [2024-07-24 17:27:06.701030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:20.536 [2024-07-24 17:27:06.701071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:26:20.536 [2024-07-24 17:27:06.701083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.536 [2024-07-24 17:27:06.701133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.751 ms, result 0 00:26:20.536 true 00:26:20.536 17:27:06 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:20.794 [2024-07-24 17:27:06.900750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.794 [2024-07-24 17:27:06.900818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:20.794 [2024-07-24 17:27:06.900850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:26:20.794 [2024-07-24 17:27:06.900878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.794 [2024-07-24 17:27:06.900922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.377 ms, result 0 00:26:20.794 true 00:26:20.794 17:27:06 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79859 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79859 ']' 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79859 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79859 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:20.794 killing process with pid 79859 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79859' 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79859 00:26:20.794 17:27:06 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79859 00:26:21.730 [2024-07-24 17:27:07.828158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.828234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:21.730 [2024-07-24 17:27:07.828256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:21.730 [2024-07-24 17:27:07.828269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.828300] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:21.730 [2024-07-24 17:27:07.831591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.831638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:21.730 [2024-07-24 17:27:07.831652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.271 ms 00:26:21.730 [2024-07-24 17:27:07.831690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.832007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.832034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:21.730 [2024-07-24 17:27:07.832062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:21.730 [2024-07-24 17:27:07.832074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.835866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.835926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:21.730 [2024-07-24 17:27:07.835941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:26:21.730 [2024-07-24 17:27:07.835954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.842066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.842115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:21.730 [2024-07-24 17:27:07.842128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.069 ms 00:26:21.730 [2024-07-24 17:27:07.842141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.852584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.852636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:21.730 [2024-07-24 17:27:07.852651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.393 ms 00:26:21.730 [2024-07-24 17:27:07.852671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.860694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.860750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:21.730 [2024-07-24 17:27:07.860765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.982 ms 00:26:21.730 [2024-07-24 17:27:07.860777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.860923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.860945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:21.730 [2024-07-24 17:27:07.860957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:21.730 [2024-07-24 17:27:07.860979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.871706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.871757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:21.730 [2024-07-24 17:27:07.871771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.705 ms 00:26:21.730 [2024-07-24 17:27:07.871782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.882164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.882215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:21.730 [2024-07-24 
17:27:07.882229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.328 ms 00:26:21.730 [2024-07-24 17:27:07.882245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.892384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.892447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:21.730 [2024-07-24 17:27:07.892461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.087 ms 00:26:21.730 [2024-07-24 17:27:07.892472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.730 [2024-07-24 17:27:07.902611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.730 [2024-07-24 17:27:07.902669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:21.730 [2024-07-24 17:27:07.902685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.060 ms 00:26:21.731 [2024-07-24 17:27:07.902697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.731 [2024-07-24 17:27:07.902748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:21.731 [2024-07-24 17:27:07.902774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.902994] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 
17:27:07.903331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:26:21.731 [2024-07-24 17:27:07.903621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:21.731 [2024-07-24 17:27:07.903792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.903994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.904007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.904018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.904046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.904057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:21.732 [2024-07-24 17:27:07.904077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:21.732 [2024-07-24 17:27:07.904087] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:26:21.732 [2024-07-24 17:27:07.904102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:21.732 [2024-07-24 17:27:07.904112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:21.732 [2024-07-24 17:27:07.904124] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:21.732 [2024-07-24 17:27:07.904134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:21.732 [2024-07-24 17:27:07.904146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:21.732 [2024-07-24 17:27:07.904156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:21.732 [2024-07-24 17:27:07.904168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:21.732 [2024-07-24 17:27:07.904177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:21.732 [2024-07-24 17:27:07.904199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:21.732 [2024-07-24 17:27:07.904209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.732 [2024-07-24 17:27:07.904221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:21.732 [2024-07-24 17:27:07.904232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:26:21.732 [2024-07-24 17:27:07.904247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.918667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.732 [2024-07-24 17:27:07.918717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:21.732 [2024-07-24 17:27:07.918732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.385 ms 00:26:21.732 [2024-07-24 17:27:07.918747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.919227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:21.732 [2024-07-24 17:27:07.919299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:21.732 [2024-07-24 17:27:07.919325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:26:21.732 [2024-07-24 17:27:07.919339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.965850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.732 [2024-07-24 17:27:07.965909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.732 [2024-07-24 17:27:07.965923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.732 [2024-07-24 17:27:07.965936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.966051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.732 [2024-07-24 17:27:07.966073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.732 [2024-07-24 17:27:07.966088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.732 [2024-07-24 17:27:07.966100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.966152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.732 [2024-07-24 17:27:07.966172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.732 [2024-07-24 17:27:07.966183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.732 [2024-07-24 17:27:07.966198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.732 [2024-07-24 17:27:07.966220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.732 [2024-07-24 17:27:07.966235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.732 [2024-07-24 17:27:07.966246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.732 [2024-07-24 17:27:07.966261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.055142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.055207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.991 [2024-07-24 17:27:08.055226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.055241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.130868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.130959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.991 [2024-07-24 17:27:08.130990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.991 [2024-07-24 17:27:08.131178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
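The shutdown statistics dump above ends with "WAF: inf". Assuming the conventional definition of write amplification factor (the formula is an inference, not quoted from ftl_debug.c), the figures in the dump explain it directly:

    WAF = total media writes / user writes = 960 / 0 -> inf

With zero user writes, all 960 writes counted so far were internal housekeeping I/O, so the ratio is rendered as infinity rather than a number.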
00:26:21.991 [2024-07-24 17:27:08.131232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.991 [2024-07-24 17:27:08.131275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.991 [2024-07-24 17:27:08.131454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:21.991 [2024-07-24 17:27:08.131547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.991 [2024-07-24 17:27:08.131636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.991 [2024-07-24 17:27:08.131781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.991 [2024-07-24 17:27:08.131794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.991 [2024-07-24 17:27:08.131808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.991 [2024-07-24 17:27:08.131974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 303.797 ms, result 0 00:26:22.925 17:27:09 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:22.925 [2024-07-24 17:27:09.134523] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
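The xtrace record above ("ftl/trim.sh@105") shows the trim test reading the device contents back into a plain file with spdk_dd. A minimal sketch of the same invocation pattern follows, with repo-relative paths substituted for the absolute ones in the log; the 4 KiB logical block size used in the comment is an assumption inferred from the "Copying: 256/256 [MB]" progress printed further down, not something the log states:

    # --ib    input bdev to read (the FTL device, ftl0)
    # --of    output file on the host filesystem
    # --count number of logical blocks to copy: 65536 x 4 KiB = 256 MiB
    # --json  SPDK JSON config that re-creates the bdev stack
    ./build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data \
        --count=65536 --json=test/ftl/config/ftl.json

At roughly 12 s of wall time for 256 MB (17:27:10.36 to 17:27:22.31 in the timestamps further down), this is consistent with the "average 21 MBps" figure the copy loop reports.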
00:26:22.925 [2024-07-24 17:27:09.134719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79921 ] 00:26:23.182 [2024-07-24 17:27:09.309436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.440 [2024-07-24 17:27:09.506915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.697 [2024-07-24 17:27:09.832581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.697 [2024-07-24 17:27:09.832701] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.977 [2024-07-24 17:27:09.993726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:09.993813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:23.977 [2024-07-24 17:27:09.993833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:23.977 [2024-07-24 17:27:09.993845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:09.997387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:09.997424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.977 [2024-07-24 17:27:09.997438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:26:23.977 [2024-07-24 17:27:09.997448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:09.997582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:23.977 [2024-07-24 17:27:09.998557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:23.977 [2024-07-24 17:27:09.998592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:09.998605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.977 [2024-07-24 17:27:09.998617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:26:23.977 [2024-07-24 17:27:09.998628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.000757] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:23.977 [2024-07-24 17:27:10.018307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.018362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:23.977 [2024-07-24 17:27:10.018384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.551 ms 00:26:23.977 [2024-07-24 17:27:10.018396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.018517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.018538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:23.977 [2024-07-24 17:27:10.018551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:23.977 [2024-07-24 17:27:10.018561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.028517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:23.977 [2024-07-24 17:27:10.028568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.977 [2024-07-24 17:27:10.028582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.851 ms 00:26:23.977 [2024-07-24 17:27:10.028594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.028814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.028836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.977 [2024-07-24 17:27:10.028850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:23.977 [2024-07-24 17:27:10.028861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.028904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.028919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:23.977 [2024-07-24 17:27:10.028934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:23.977 [2024-07-24 17:27:10.028945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.028976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:23.977 [2024-07-24 17:27:10.034253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.034287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.977 [2024-07-24 17:27:10.034303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.285 ms 00:26:23.977 [2024-07-24 17:27:10.034313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.034383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.977 [2024-07-24 17:27:10.034402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:23.977 [2024-07-24 17:27:10.034415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:23.977 [2024-07-24 17:27:10.034426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.977 [2024-07-24 17:27:10.034456] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:23.977 [2024-07-24 17:27:10.034501] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:23.977 [2024-07-24 17:27:10.034609] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:23.977 [2024-07-24 17:27:10.034630] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:23.977 [2024-07-24 17:27:10.034747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:23.977 [2024-07-24 17:27:10.034764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:23.977 [2024-07-24 17:27:10.034780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:23.977 [2024-07-24 17:27:10.034795] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:23.978 [2024-07-24 17:27:10.034809] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:23.978 [2024-07-24 17:27:10.034828] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:23.978 [2024-07-24 17:27:10.034839] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:23.978 [2024-07-24 17:27:10.034850] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:23.978 [2024-07-24 17:27:10.034860] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:23.978 [2024-07-24 17:27:10.034872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.978 [2024-07-24 17:27:10.034883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:23.978 [2024-07-24 17:27:10.034895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:26:23.978 [2024-07-24 17:27:10.034905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.978 [2024-07-24 17:27:10.035020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.978 [2024-07-24 17:27:10.035037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:23.978 [2024-07-24 17:27:10.035054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:23.978 [2024-07-24 17:27:10.035065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.978 [2024-07-24 17:27:10.035173] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:23.978 [2024-07-24 17:27:10.035190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:23.978 [2024-07-24 17:27:10.035201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:23.978 [2024-07-24 17:27:10.035238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:23.978 [2024-07-24 17:27:10.035274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.978 [2024-07-24 17:27:10.035303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:23.978 [2024-07-24 17:27:10.035312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:23.978 [2024-07-24 17:27:10.035322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.978 [2024-07-24 17:27:10.035333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:23.978 [2024-07-24 17:27:10.035343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:23.978 [2024-07-24 17:27:10.035353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:23.978 [2024-07-24 17:27:10.035373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035397] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:23.978 [2024-07-24 17:27:10.035418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:23.978 [2024-07-24 17:27:10.035449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:23.978 [2024-07-24 17:27:10.035481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:23.978 [2024-07-24 17:27:10.035511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:23.978 [2024-07-24 17:27:10.035541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.978 [2024-07-24 17:27:10.035561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:23.978 [2024-07-24 17:27:10.035572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:23.978 [2024-07-24 17:27:10.035583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.978 [2024-07-24 17:27:10.035593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:23.978 [2024-07-24 17:27:10.035603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:23.978 [2024-07-24 17:27:10.035614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:23.978 [2024-07-24 17:27:10.035635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:23.978 [2024-07-24 17:27:10.035658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035671] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:23.978 [2024-07-24 17:27:10.035683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:23.978 [2024-07-24 17:27:10.035695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.978 [2024-07-24 17:27:10.035729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:23.978 [2024-07-24 17:27:10.035740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:23.978 [2024-07-24 17:27:10.035750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:23.978 
[2024-07-24 17:27:10.035760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:23.978 [2024-07-24 17:27:10.035771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:23.978 [2024-07-24 17:27:10.035781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:23.978 [2024-07-24 17:27:10.035793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:23.978 [2024-07-24 17:27:10.035806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:23.978 [2024-07-24 17:27:10.035831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:23.978 [2024-07-24 17:27:10.035841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:23.978 [2024-07-24 17:27:10.035852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:23.978 [2024-07-24 17:27:10.035863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:23.978 [2024-07-24 17:27:10.035874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:23.978 [2024-07-24 17:27:10.035885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:23.978 [2024-07-24 17:27:10.035896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:23.978 [2024-07-24 17:27:10.035907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:23.978 [2024-07-24 17:27:10.035918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:23.978 [2024-07-24 17:27:10.035974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:23.978 [2024-07-24 17:27:10.035986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.035998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.978 [2024-07-24 17:27:10.036010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:23.978 [2024-07-24 17:27:10.036021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:23.978 [2024-07-24 17:27:10.036032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:23.978 [2024-07-24 17:27:10.036044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.978 [2024-07-24 17:27:10.036056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:23.978 [2024-07-24 17:27:10.036067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:26:23.978 [2024-07-24 17:27:10.036077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.978 [2024-07-24 17:27:10.083132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.978 [2024-07-24 17:27:10.083196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.978 [2024-07-24 17:27:10.083259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.979 ms 00:26:23.978 [2024-07-24 17:27:10.083297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.978 [2024-07-24 17:27:10.083479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.978 [2024-07-24 17:27:10.083504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:23.978 [2024-07-24 17:27:10.083516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:23.979 [2024-07-24 17:27:10.083526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.122855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.122914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.979 [2024-07-24 17:27:10.122956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.299 ms 00:26:23.979 [2024-07-24 17:27:10.122973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.123103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.123143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.979 [2024-07-24 17:27:10.123157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:23.979 [2024-07-24 17:27:10.123183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.123818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.123865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.979 [2024-07-24 17:27:10.123878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:26:23.979 [2024-07-24 17:27:10.123896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.124059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.124111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.979 [2024-07-24 17:27:10.124124] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:26:23.979 [2024-07-24 17:27:10.124133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.142098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.142148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.979 [2024-07-24 17:27:10.142162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.938 ms 00:26:23.979 [2024-07-24 17:27:10.142172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.158510] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:23.979 [2024-07-24 17:27:10.158577] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:23.979 [2024-07-24 17:27:10.158592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.158603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:23.979 [2024-07-24 17:27:10.158615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.287 ms 00:26:23.979 [2024-07-24 17:27:10.158624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.186486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.186521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:23.979 [2024-07-24 17:27:10.186536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.766 ms 00:26:23.979 [2024-07-24 17:27:10.186546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.979 [2024-07-24 17:27:10.200392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.979 [2024-07-24 17:27:10.200442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:23.979 [2024-07-24 17:27:10.200456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.761 ms 00:26:23.979 [2024-07-24 17:27:10.200465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.213934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.213997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:24.248 [2024-07-24 17:27:10.214012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.370 ms 00:26:24.248 [2024-07-24 17:27:10.214022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.214904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.214955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:24.248 [2024-07-24 17:27:10.214974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:26:24.248 [2024-07-24 17:27:10.214984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.284637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.284732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:24.248 [2024-07-24 17:27:10.284752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.620 ms 00:26:24.248 [2024-07-24 17:27:10.284764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.295759] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:24.248 [2024-07-24 17:27:10.314477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.314542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:24.248 [2024-07-24 17:27:10.314560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.561 ms 00:26:24.248 [2024-07-24 17:27:10.314571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.314757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.314779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:24.248 [2024-07-24 17:27:10.314792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:24.248 [2024-07-24 17:27:10.314802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.314906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.314956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:24.248 [2024-07-24 17:27:10.314969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:24.248 [2024-07-24 17:27:10.314980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.315015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.315037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:24.248 [2024-07-24 17:27:10.315049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.248 [2024-07-24 17:27:10.315074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.315117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:24.248 [2024-07-24 17:27:10.315151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.315163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:24.248 [2024-07-24 17:27:10.315175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:24.248 [2024-07-24 17:27:10.315186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.344094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.344130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:24.248 [2024-07-24 17:27:10.344144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.858 ms 00:26:24.248 [2024-07-24 17:27:10.344155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.248 [2024-07-24 17:27:10.344271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.248 [2024-07-24 17:27:10.344290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:24.248 [2024-07-24 17:27:10.344302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:24.248 [2024-07-24 17:27:10.344311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
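Each management step in this transcript is emitted by mngt/ftl_mngt.c as a fixed group of records: an Action (or Rollback) marker, then name:, duration:, and status: lines. That regularity makes the log easy to mine; the sketch below pairs each name with the duration that follows it and ranks the slowest steps ("Restore P2L checkpoints" at 69.620 ms is the largest in this startup). Here "console.log" is a hypothetical saved copy of this console output, not a file the harness produces:

    awk '/428:trace_step/ { n = $0; sub(/.*name: /, "", n) }
         /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                            printf "%10.3f ms  %s\n", d, n }' console.log | sort -rn | head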
00:26:24.248 [2024-07-24 17:27:10.345610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.248 [2024-07-24 17:27:10.349320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.487 ms, result 0 00:26:24.248 [2024-07-24 17:27:10.350163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:24.248 [2024-07-24 17:27:10.364831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:36.229  Copying: 24/256 [MB] (24 MBps) Copying: 45/256 [MB] (21 MBps) Copying: 67/256 [MB] (21 MBps) Copying: 90/256 [MB] (22 MBps) Copying: 112/256 [MB] (22 MBps) Copying: 134/256 [MB] (21 MBps) Copying: 155/256 [MB] (21 MBps) Copying: 176/256 [MB] (21 MBps) Copying: 197/256 [MB] (21 MBps) Copying: 219/256 [MB] (21 MBps) Copying: 240/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-24 17:27:22.310177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:36.229 [2024-07-24 17:27:22.323715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.323772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:36.229 [2024-07-24 17:27:22.323807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:36.229 [2024-07-24 17:27:22.323829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.323858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:36.229 [2024-07-24 17:27:22.327223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.327258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:36.229 [2024-07-24 17:27:22.327302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:26:36.229 [2024-07-24 17:27:22.327313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.327616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.327633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:36.229 [2024-07-24 17:27:22.327645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:26:36.229 [2024-07-24 17:27:22.327655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.330844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.330891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:36.229 [2024-07-24 17:27:22.330920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:26:36.229 [2024-07-24 17:27:22.330954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.336950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.336997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:36.229 [2024-07-24 17:27:22.337025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.970 ms 00:26:36.229 [2024-07-24 17:27:22.337035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:26:36.229 [2024-07-24 17:27:22.361919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.361974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:36.229 [2024-07-24 17:27:22.362005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.805 ms 00:26:36.229 [2024-07-24 17:27:22.362015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.376879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.376934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:36.229 [2024-07-24 17:27:22.376969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.836 ms 00:26:36.229 [2024-07-24 17:27:22.376980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.377127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.377162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:36.229 [2024-07-24 17:27:22.377190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:26:36.229 [2024-07-24 17:27:22.377201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.405388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.405428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:36.229 [2024-07-24 17:27:22.405457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.165 ms 00:26:36.229 [2024-07-24 17:27:22.405467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.432705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.432760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:36.229 [2024-07-24 17:27:22.432790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.196 ms 00:26:36.229 [2024-07-24 17:27:22.432799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.229 [2024-07-24 17:27:22.459698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.229 [2024-07-24 17:27:22.459747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:36.229 [2024-07-24 17:27:22.459780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.853 ms 00:26:36.229 [2024-07-24 17:27:22.459790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.489 [2024-07-24 17:27:22.485408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.489 [2024-07-24 17:27:22.485473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:36.489 [2024-07-24 17:27:22.485504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:26:36.489 [2024-07-24 17:27:22.485513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.489 [2024-07-24 17:27:22.485557] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:36.489 [2024-07-24 17:27:22.485577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:36.489 [2024-07-24 17:27:22.485789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.485991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486178] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486460] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:36.490 [2024-07-24 17:27:22.486764] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:36.490 [2024-07-24 17:27:22.486774] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ed9e678-873d-4fa8-9ddb-2bec19870f6b 00:26:36.490 [2024-07-24 17:27:22.486801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:36.491 [2024-07-24 17:27:22.486811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:36.491 [2024-07-24 17:27:22.486840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:36.491 [2024-07-24 17:27:22.486850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:36.491 [2024-07-24 17:27:22.486876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:36.491 [2024-07-24 17:27:22.486887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:36.491 [2024-07-24 17:27:22.486898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:36.491 [2024-07-24 17:27:22.486907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:36.491 [2024-07-24 17:27:22.486917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:36.491 [2024-07-24 17:27:22.486938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.491 [2024-07-24 17:27:22.486979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:36.491 [2024-07-24 17:27:22.487000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:26:36.491 [2024-07-24 17:27:22.487011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.502672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.491 [2024-07-24 17:27:22.502757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:36.491 [2024-07-24 17:27:22.502790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:26:36.491 [2024-07-24 17:27:22.502802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.503315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.491 [2024-07-24 17:27:22.503374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:36.491 [2024-07-24 17:27:22.503419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:26:36.491 [2024-07-24 17:27:22.503430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.541260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.541330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.491 [2024-07-24 17:27:22.541361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.541372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.541504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.541521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.491 [2024-07-24 17:27:22.541533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.541543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.541638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.541655] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.491 [2024-07-24 17:27:22.541668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.541680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.541721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.541743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.491 [2024-07-24 17:27:22.541755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.541765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.630059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.630139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.491 [2024-07-24 17:27:22.630172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.630182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.702553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.702624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.491 [2024-07-24 17:27:22.702656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.702684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.702791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.702810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.491 [2024-07-24 17:27:22.702822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.702833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.702882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.702910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.491 [2024-07-24 17:27:22.702955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.702973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.703098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.703117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.491 [2024-07-24 17:27:22.703130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.703142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.703196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.703213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:36.491 [2024-07-24 17:27:22.703226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.703243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.703319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:36.491 [2024-07-24 17:27:22.703347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.491 [2024-07-24 17:27:22.703358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.703368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.703419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.491 [2024-07-24 17:27:22.703434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.491 [2024-07-24 17:27:22.703449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.491 [2024-07-24 17:27:22.703460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.491 [2024-07-24 17:27:22.703638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.919 ms, result 0 00:26:37.426 00:26:37.426 00:26:37.684 17:27:23 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:38.251 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:38.251 17:27:24 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79859 00:26:38.251 17:27:24 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79859 ']' 00:26:38.251 17:27:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79859 00:26:38.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79859) - No such process 00:26:38.251 Process with pid 79859 is not found 00:26:38.251 17:27:24 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 79859 is not found' 00:26:38.251 00:26:38.251 real 1m12.453s 00:26:38.251 user 1m35.904s 00:26:38.251 sys 0m7.001s 00:26:38.251 17:27:24 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.251 17:27:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 ************************************ 00:26:38.251 END TEST ftl_trim 00:26:38.251 ************************************ 00:26:38.251 17:27:24 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:38.251 17:27:24 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:38.251 17:27:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.251 17:27:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 ************************************ 00:26:38.251 START TEST ftl_restore 00:26:38.251 ************************************ 00:26:38.251 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:38.251 * Looking for test storage... 
00:26:38.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.eb0XNxH6XT 00:26:38.251 17:27:24 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:38.251 17:27:24 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80133 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80133 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80133 ']' 00:26:38.252 17:27:24 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.252 17:27:24 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:38.510 [2024-07-24 17:27:24.565395] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:38.510 [2024-07-24 17:27:24.565585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80133 ] 00:26:38.510 [2024-07-24 17:27:24.735400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.769 [2024-07-24 17:27:24.935499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.704 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.704 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:39.704 17:27:25 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:39.963 17:27:25 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:39.963 17:27:25 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:39.963 17:27:25 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:39.963 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:39.963 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:39.963 17:27:25 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:26:39.963 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:26:39.963 17:27:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:39.963 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:39.963 { 00:26:39.963 "name": "nvme0n1", 00:26:39.963 "aliases": [ 00:26:39.963 "c7194220-b242-435e-bbbf-e96023518fbb" 00:26:39.963 ], 00:26:39.963 "product_name": "NVMe disk", 00:26:39.963 "block_size": 4096, 00:26:39.963 "num_blocks": 1310720, 00:26:39.963 "uuid": "c7194220-b242-435e-bbbf-e96023518fbb", 00:26:39.963 "assigned_rate_limits": { 00:26:39.963 "rw_ios_per_sec": 0, 00:26:39.963 "rw_mbytes_per_sec": 0, 00:26:39.963 "r_mbytes_per_sec": 0, 00:26:39.963 "w_mbytes_per_sec": 0 00:26:39.963 }, 00:26:39.963 "claimed": true, 00:26:39.963 "claim_type": "read_many_write_one", 00:26:39.963 "zoned": false, 00:26:39.963 "supported_io_types": { 00:26:39.963 "read": true, 00:26:39.963 "write": true, 00:26:39.963 "unmap": true, 00:26:39.963 "flush": true, 00:26:39.963 "reset": true, 00:26:39.963 "nvme_admin": true, 00:26:39.963 "nvme_io": true, 00:26:39.963 "nvme_io_md": false, 00:26:39.963 "write_zeroes": true, 00:26:39.963 "zcopy": false, 00:26:39.963 "get_zone_info": false, 00:26:39.963 "zone_management": false, 00:26:39.963 "zone_append": false, 00:26:39.963 "compare": true, 00:26:39.963 "compare_and_write": false, 00:26:39.963 "abort": true, 00:26:39.963 "seek_hole": false, 00:26:39.963 "seek_data": false, 00:26:39.963 "copy": true, 00:26:39.963 "nvme_iov_md": false 00:26:39.963 }, 00:26:39.963 "driver_specific": { 00:26:39.963 "nvme": [ 00:26:39.963 { 00:26:39.963 "pci_address": "0000:00:11.0", 00:26:39.963 "trid": { 00:26:39.963 "trtype": "PCIe", 00:26:39.963 "traddr": "0000:00:11.0" 00:26:39.963 }, 00:26:39.963 "ctrlr_data": { 00:26:39.963 "cntlid": 0, 00:26:39.963 "vendor_id": "0x1b36", 00:26:39.963 "model_number": "QEMU NVMe Ctrl", 00:26:39.963 "serial_number": "12341", 00:26:39.963 "firmware_revision": "8.0.0", 00:26:39.963 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:39.963 "oacs": { 00:26:39.963 "security": 0, 00:26:39.963 "format": 1, 00:26:39.963 "firmware": 0, 00:26:39.963 "ns_manage": 1 00:26:39.963 }, 00:26:39.963 "multi_ctrlr": false, 00:26:39.963 "ana_reporting": false 00:26:39.963 }, 00:26:39.963 "vs": { 00:26:39.963 "nvme_version": "1.4" 00:26:39.963 }, 00:26:39.963 "ns_data": { 00:26:39.963 "id": 1, 00:26:39.963 "can_share": false 00:26:39.963 } 00:26:39.963 } 00:26:39.963 ], 00:26:39.963 "mp_policy": "active_passive" 00:26:39.963 } 00:26:39.963 } 00:26:39.963 ]' 00:26:39.963 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:40.222 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:26:40.222 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:40.222 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:40.222 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:40.222 17:27:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:26:40.222 17:27:26 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:40.222 17:27:26 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:40.222 17:27:26 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:40.222 17:27:26 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:40.222 17:27:26 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:40.480 17:27:26 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=503f0843-e8ee-4538-802c-a516385d8a08 00:26:40.480 17:27:26 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:40.480 17:27:26 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 503f0843-e8ee-4538-802c-a516385d8a08 00:26:40.738 17:27:26 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:40.997 17:27:27 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fa41016a-79b3-4260-abe4-f968dfb83ea3 00:26:40.997 17:27:27 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fa41016a-79b3-4260-abe4-f968dfb83ea3 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:41.256 17:27:27 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.256 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.256 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:41.256 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:26:41.256 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:26:41.256 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.514 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:41.514 { 00:26:41.514 "name": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:41.514 "aliases": [ 00:26:41.514 "lvs/nvme0n1p0" 00:26:41.515 ], 00:26:41.515 "product_name": "Logical Volume", 00:26:41.515 "block_size": 4096, 00:26:41.515 "num_blocks": 26476544, 00:26:41.515 "uuid": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:41.515 "assigned_rate_limits": { 00:26:41.515 "rw_ios_per_sec": 0, 00:26:41.515 "rw_mbytes_per_sec": 0, 00:26:41.515 "r_mbytes_per_sec": 0, 00:26:41.515 "w_mbytes_per_sec": 0 00:26:41.515 }, 00:26:41.515 "claimed": false, 00:26:41.515 "zoned": false, 00:26:41.515 "supported_io_types": { 00:26:41.515 "read": true, 00:26:41.515 "write": true, 00:26:41.515 "unmap": true, 00:26:41.515 "flush": false, 00:26:41.515 "reset": true, 00:26:41.515 "nvme_admin": false, 00:26:41.515 "nvme_io": false, 00:26:41.515 "nvme_io_md": false, 00:26:41.515 "write_zeroes": true, 00:26:41.515 "zcopy": false, 00:26:41.515 "get_zone_info": false, 00:26:41.515 "zone_management": false, 00:26:41.515 "zone_append": false, 00:26:41.515 "compare": false, 00:26:41.515 "compare_and_write": false, 00:26:41.515 "abort": false, 
00:26:41.515 "seek_hole": true, 00:26:41.515 "seek_data": true, 00:26:41.515 "copy": false, 00:26:41.515 "nvme_iov_md": false 00:26:41.515 }, 00:26:41.515 "driver_specific": { 00:26:41.515 "lvol": { 00:26:41.515 "lvol_store_uuid": "fa41016a-79b3-4260-abe4-f968dfb83ea3", 00:26:41.515 "base_bdev": "nvme0n1", 00:26:41.515 "thin_provision": true, 00:26:41.515 "num_allocated_clusters": 0, 00:26:41.515 "snapshot": false, 00:26:41.515 "clone": false, 00:26:41.515 "esnap_clone": false 00:26:41.515 } 00:26:41.515 } 00:26:41.515 } 00:26:41.515 ]' 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:41.515 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:26:41.515 17:27:27 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:41.515 17:27:27 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:41.515 17:27:27 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:41.774 17:27:27 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:41.774 17:27:27 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:41.774 17:27:27 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.774 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=58048bed-e48b-4d6c-b586-981bb041199a 00:26:41.774 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:41.774 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:26:41.774 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:26:41.774 17:27:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58048bed-e48b-4d6c-b586-981bb041199a 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:42.033 { 00:26:42.033 "name": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:42.033 "aliases": [ 00:26:42.033 "lvs/nvme0n1p0" 00:26:42.033 ], 00:26:42.033 "product_name": "Logical Volume", 00:26:42.033 "block_size": 4096, 00:26:42.033 "num_blocks": 26476544, 00:26:42.033 "uuid": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:42.033 "assigned_rate_limits": { 00:26:42.033 "rw_ios_per_sec": 0, 00:26:42.033 "rw_mbytes_per_sec": 0, 00:26:42.033 "r_mbytes_per_sec": 0, 00:26:42.033 "w_mbytes_per_sec": 0 00:26:42.033 }, 00:26:42.033 "claimed": false, 00:26:42.033 "zoned": false, 00:26:42.033 "supported_io_types": { 00:26:42.033 "read": true, 00:26:42.033 "write": true, 00:26:42.033 "unmap": true, 00:26:42.033 "flush": false, 00:26:42.033 "reset": true, 00:26:42.033 "nvme_admin": false, 00:26:42.033 "nvme_io": false, 00:26:42.033 "nvme_io_md": false, 00:26:42.033 "write_zeroes": true, 00:26:42.033 "zcopy": false, 00:26:42.033 "get_zone_info": false, 00:26:42.033 "zone_management": false, 00:26:42.033 "zone_append": false, 00:26:42.033 "compare": false, 00:26:42.033 "compare_and_write": false, 00:26:42.033 "abort": false, 00:26:42.033 "seek_hole": true, 00:26:42.033 "seek_data": true, 
00:26:42.033 "copy": false, 00:26:42.033 "nvme_iov_md": false 00:26:42.033 }, 00:26:42.033 "driver_specific": { 00:26:42.033 "lvol": { 00:26:42.033 "lvol_store_uuid": "fa41016a-79b3-4260-abe4-f968dfb83ea3", 00:26:42.033 "base_bdev": "nvme0n1", 00:26:42.033 "thin_provision": true, 00:26:42.033 "num_allocated_clusters": 0, 00:26:42.033 "snapshot": false, 00:26:42.033 "clone": false, 00:26:42.033 "esnap_clone": false 00:26:42.033 } 00:26:42.033 } 00:26:42.033 } 00:26:42.033 ]' 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:42.033 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:26:42.033 17:27:28 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:42.033 17:27:28 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:42.292 17:27:28 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:42.292 17:27:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 58048bed-e48b-4d6c-b586-981bb041199a 00:26:42.292 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=58048bed-e48b-4d6c-b586-981bb041199a 00:26:42.292 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:42.292 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:26:42.292 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:26:42.292 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58048bed-e48b-4d6c-b586-981bb041199a 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:42.551 { 00:26:42.551 "name": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:42.551 "aliases": [ 00:26:42.551 "lvs/nvme0n1p0" 00:26:42.551 ], 00:26:42.551 "product_name": "Logical Volume", 00:26:42.551 "block_size": 4096, 00:26:42.551 "num_blocks": 26476544, 00:26:42.551 "uuid": "58048bed-e48b-4d6c-b586-981bb041199a", 00:26:42.551 "assigned_rate_limits": { 00:26:42.551 "rw_ios_per_sec": 0, 00:26:42.551 "rw_mbytes_per_sec": 0, 00:26:42.551 "r_mbytes_per_sec": 0, 00:26:42.551 "w_mbytes_per_sec": 0 00:26:42.551 }, 00:26:42.551 "claimed": false, 00:26:42.551 "zoned": false, 00:26:42.551 "supported_io_types": { 00:26:42.551 "read": true, 00:26:42.551 "write": true, 00:26:42.551 "unmap": true, 00:26:42.551 "flush": false, 00:26:42.551 "reset": true, 00:26:42.551 "nvme_admin": false, 00:26:42.551 "nvme_io": false, 00:26:42.551 "nvme_io_md": false, 00:26:42.551 "write_zeroes": true, 00:26:42.551 "zcopy": false, 00:26:42.551 "get_zone_info": false, 00:26:42.551 "zone_management": false, 00:26:42.551 "zone_append": false, 00:26:42.551 "compare": false, 00:26:42.551 "compare_and_write": false, 00:26:42.551 "abort": false, 00:26:42.551 "seek_hole": true, 00:26:42.551 "seek_data": true, 00:26:42.551 "copy": false, 00:26:42.551 "nvme_iov_md": false 00:26:42.551 }, 00:26:42.551 "driver_specific": { 00:26:42.551 "lvol": { 00:26:42.551 "lvol_store_uuid": "fa41016a-79b3-4260-abe4-f968dfb83ea3", 00:26:42.551 "base_bdev": "nvme0n1", 
00:26:42.551 "thin_provision": true, 00:26:42.551 "num_allocated_clusters": 0, 00:26:42.551 "snapshot": false, 00:26:42.551 "clone": false, 00:26:42.551 "esnap_clone": false 00:26:42.551 } 00:26:42.551 } 00:26:42.551 } 00:26:42.551 ]' 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:42.551 17:27:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 58048bed-e48b-4d6c-b586-981bb041199a --l2p_dram_limit 10' 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:42.551 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:42.551 17:27:28 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 58048bed-e48b-4d6c-b586-981bb041199a --l2p_dram_limit 10 -c nvc0n1p0 00:26:42.811 [2024-07-24 17:27:28.978963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.979047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.811 [2024-07-24 17:27:28.979074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:42.811 [2024-07-24 17:27:28.979088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.979162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.979182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.811 [2024-07-24 17:27:28.979194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:42.811 [2024-07-24 17:27:28.979207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.979235] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.811 [2024-07-24 17:27:28.980204] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.811 [2024-07-24 17:27:28.980228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.980246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.811 [2024-07-24 17:27:28.980258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:26:42.811 [2024-07-24 17:27:28.980271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.980399] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 816f2449-3799-4931-8b3f-6dea6be81c44 00:26:42.811 [2024-07-24 
17:27:28.982265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.982311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:42.811 [2024-07-24 17:27:28.982329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:42.811 [2024-07-24 17:27:28.982340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.992029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.992081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.811 [2024-07-24 17:27:28.992098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.623 ms 00:26:42.811 [2024-07-24 17:27:28.992109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.992228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.992248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.811 [2024-07-24 17:27:28.992263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:42.811 [2024-07-24 17:27:28.992273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.992358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.992374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.811 [2024-07-24 17:27:28.992391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:42.811 [2024-07-24 17:27:28.992408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.992450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.811 [2024-07-24 17:27:28.997195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.997247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.811 [2024-07-24 17:27:28.997261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:26:42.811 [2024-07-24 17:27:28.997274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.997316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.997334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.811 [2024-07-24 17:27:28.997345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:42.811 [2024-07-24 17:27:28.997357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.997397] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:42.811 [2024-07-24 17:27:28.997588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:42.811 [2024-07-24 17:27:28.997605] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.811 [2024-07-24 17:27:28.997624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:42.811 [2024-07-24 17:27:28.997638] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:26:42.811 [2024-07-24 17:27:28.997653] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.811 [2024-07-24 17:27:28.997666] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.811 [2024-07-24 17:27:28.997698] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.811 [2024-07-24 17:27:28.997730] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.811 [2024-07-24 17:27:28.997744] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.811 [2024-07-24 17:27:28.997755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.997769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.811 [2024-07-24 17:27:28.997781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:26:42.811 [2024-07-24 17:27:28.997794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.811 [2024-07-24 17:27:28.997879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.811 [2024-07-24 17:27:28.997895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.812 [2024-07-24 17:27:28.997907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:42.812 [2024-07-24 17:27:28.997923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.812 [2024-07-24 17:27:28.998049] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.812 [2024-07-24 17:27:28.998069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.812 [2024-07-24 17:27:28.998092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.812 [2024-07-24 17:27:28.998130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.812 [2024-07-24 17:27:28.998164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.812 [2024-07-24 17:27:28.998189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.812 [2024-07-24 17:27:28.998202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.812 [2024-07-24 17:27:28.998211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.812 [2024-07-24 17:27:28.998223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:42.812 [2024-07-24 17:27:28.998234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.812 [2024-07-24 17:27:28.998246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.812 [2024-07-24 17:27:28.998270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:26:42.812 [2024-07-24 17:27:28.998280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.812 [2024-07-24 17:27:28.998302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.812 [2024-07-24 17:27:28.998336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.812 [2024-07-24 17:27:28.998367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.812 [2024-07-24 17:27:28.998400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.812 [2024-07-24 17:27:28.998432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.812 [2024-07-24 17:27:28.998457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.812 [2024-07-24 17:27:28.998471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.812 [2024-07-24 17:27:28.998481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.812 [2024-07-24 17:27:28.998493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.812 [2024-07-24 17:27:28.998503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.812 [2024-07-24 17:27:28.998517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.812 [2024-07-24 17:27:28.998539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.812 [2024-07-24 17:27:28.998549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998561] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.812 [2024-07-24 17:27:28.998571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.812 [2024-07-24 17:27:28.998584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.812 [2024-07-24 17:27:28.998607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.812 [2024-07-24 17:27:28.998617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.812 [2024-07-24 17:27:28.998632] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.812 [2024-07-24 17:27:28.998642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.812 [2024-07-24 17:27:28.998654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.812 [2024-07-24 17:27:28.998679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.812 [2024-07-24 17:27:28.998697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.812 [2024-07-24 17:27:28.998713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.812 [2024-07-24 17:27:28.998727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.812 [2024-07-24 17:27:28.998738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.812 [2024-07-24 17:27:28.998751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.812 [2024-07-24 17:27:28.998761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.812 [2024-07-24 17:27:28.998775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.812 [2024-07-24 17:27:28.998786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.812 [2024-07-24 17:27:28.998799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:42.812 [2024-07-24 17:27:28.998810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.812 [2024-07-24 17:27:28.998822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.812 [2024-07-24 17:27:28.998833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.812 [2024-07-24 17:27:28.998847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.812 [2024-07-24 17:27:28.998858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.812 [2024-07-24 17:27:28.998871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.812 [2024-07-24 17:27:28.998882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.812 [2024-07-24 17:27:28.998894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.813 [2024-07-24 17:27:28.998906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.813 [2024-07-24 17:27:28.998951] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.813 [2024-07-24 17:27:28.998977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.813 [2024-07-24 17:27:28.998991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.813 [2024-07-24 17:27:28.999002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.813 [2024-07-24 17:27:28.999017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.813 [2024-07-24 17:27:28.999029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.813 [2024-07-24 17:27:28.999043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:26:42.813 [2024-07-24 17:27:28.999054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.813 [2024-07-24 17:27:28.999113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:42.813 [2024-07-24 17:27:28.999129] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:46.099 [2024-07-24 17:27:31.769715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.769796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:46.099 [2024-07-24 17:27:31.769820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2770.614 ms 00:26:46.099 [2024-07-24 17:27:31.769832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.805717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.805782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.099 [2024-07-24 17:27:31.805817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.628 ms 00:26:46.099 [2024-07-24 17:27:31.805829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.806008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.806059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:46.099 [2024-07-24 17:27:31.806079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:46.099 [2024-07-24 17:27:31.806090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.844705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.844766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.099 [2024-07-24 17:27:31.844786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.554 ms 00:26:46.099 [2024-07-24 17:27:31.844799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.844857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.844873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:46.099 [2024-07-24 17:27:31.844893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:26:46.099 [2024-07-24 17:27:31.844905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.845556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.845581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.099 [2024-07-24 17:27:31.845599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:26:46.099 [2024-07-24 17:27:31.845609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.845794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.845816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.099 [2024-07-24 17:27:31.845830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:46.099 [2024-07-24 17:27:31.845856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.865035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.865088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.099 [2024-07-24 17:27:31.865108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.132 ms 00:26:46.099 [2024-07-24 17:27:31.865119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.877468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:46.099 [2024-07-24 17:27:31.881643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.881718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:46.099 [2024-07-24 17:27:31.881734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.433 ms 00:26:46.099 [2024-07-24 17:27:31.881747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.967921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.967996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:46.099 [2024-07-24 17:27:31.968016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.138 ms 00:26:46.099 [2024-07-24 17:27:31.968031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.968266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.968315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:46.099 [2024-07-24 17:27:31.968328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:26:46.099 [2024-07-24 17:27:31.968345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:31.997648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:31.997752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:46.099 [2024-07-24 17:27:31.997772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.241 ms 00:26:46.099 [2024-07-24 17:27:31.997792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.026155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 
17:27:32.026215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:46.099 [2024-07-24 17:27:32.026231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.315 ms 00:26:46.099 [2024-07-24 17:27:32.026244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.027060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.027096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:46.099 [2024-07-24 17:27:32.027114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:26:46.099 [2024-07-24 17:27:32.027128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.112287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.112366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:46.099 [2024-07-24 17:27:32.112386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.095 ms 00:26:46.099 [2024-07-24 17:27:32.112403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.139341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.139417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:46.099 [2024-07-24 17:27:32.139434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.889 ms 00:26:46.099 [2024-07-24 17:27:32.139448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.166552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.166610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:46.099 [2024-07-24 17:27:32.166644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.057 ms 00:26:46.099 [2024-07-24 17:27:32.166691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.193552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.193612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:46.099 [2024-07-24 17:27:32.193628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.807 ms 00:26:46.099 [2024-07-24 17:27:32.193642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.193702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.193724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:46.099 [2024-07-24 17:27:32.193737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:46.099 [2024-07-24 17:27:32.193752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.193874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.099 [2024-07-24 17:27:32.193899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:46.099 [2024-07-24 17:27:32.193928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:46.099 [2024-07-24 17:27:32.193941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.099 [2024-07-24 17:27:32.195512] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3215.889 ms, result 0 00:26:46.099 { 00:26:46.099 "name": "ftl0", 00:26:46.099 "uuid": "816f2449-3799-4931-8b3f-6dea6be81c44" 00:26:46.099 } 00:26:46.099 17:27:32 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:46.099 17:27:32 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:46.358 17:27:32 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:46.358 17:27:32 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:46.617 [2024-07-24 17:27:32.754476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.754545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:46.617 [2024-07-24 17:27:32.754568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:46.617 [2024-07-24 17:27:32.754580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.754616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:46.617 [2024-07-24 17:27:32.757962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.758010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:46.617 [2024-07-24 17:27:32.758024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:26:46.617 [2024-07-24 17:27:32.758037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.758387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.758421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:46.617 [2024-07-24 17:27:32.758447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:26:46.617 [2024-07-24 17:27:32.758463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.761401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.761446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:46.617 [2024-07-24 17:27:32.761459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:26:46.617 [2024-07-24 17:27:32.761471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.766885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.766967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:46.617 [2024-07-24 17:27:32.766981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:26:46.617 [2024-07-24 17:27:32.766994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.793556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.793615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:46.617 [2024-07-24 17:27:32.793632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.497 ms 00:26:46.617 [2024-07-24 17:27:32.793645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 
17:27:32.810438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.810499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:46.617 [2024-07-24 17:27:32.810515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.739 ms 00:26:46.617 [2024-07-24 17:27:32.810529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.810720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.810746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:46.617 [2024-07-24 17:27:32.810759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:26:46.617 [2024-07-24 17:27:32.810773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.617 [2024-07-24 17:27:32.836628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.617 [2024-07-24 17:27:32.836692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:46.617 [2024-07-24 17:27:32.836708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:26:46.617 [2024-07-24 17:27:32.836720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.877 [2024-07-24 17:27:32.862876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.877 [2024-07-24 17:27:32.862955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:46.877 [2024-07-24 17:27:32.862973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.112 ms 00:26:46.877 [2024-07-24 17:27:32.862986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.877 [2024-07-24 17:27:32.891633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.877 [2024-07-24 17:27:32.891733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:46.877 [2024-07-24 17:27:32.891752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.601 ms 00:26:46.877 [2024-07-24 17:27:32.891766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.877 [2024-07-24 17:27:32.918352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.877 [2024-07-24 17:27:32.918411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:46.877 [2024-07-24 17:27:32.918427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:26:46.877 [2024-07-24 17:27:32.918440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.877 [2024-07-24 17:27:32.918484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:46.877 [2024-07-24 17:27:32.918511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 
17:27:32.918577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:46.877 [2024-07-24 17:27:32.918861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:26:46.878 [2024-07-24 17:27:32.918935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.918988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:46.878 [2024-07-24 17:27:32.919906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:46.878 [2024-07-24 17:27:32.919917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 816f2449-3799-4931-8b3f-6dea6be81c44 00:26:46.878 [2024-07-24 17:27:32.919932] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:46.878 [2024-07-24 17:27:32.919942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:46.878 [2024-07-24 17:27:32.919957] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:46.878 [2024-07-24 17:27:32.919968] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:46.878 [2024-07-24 17:27:32.919980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:46.878 [2024-07-24 17:27:32.919991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:46.878 [2024-07-24 17:27:32.920004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:46.878 [2024-07-24 17:27:32.920013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:46.878 [2024-07-24 17:27:32.920025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:46.878 [2024-07-24 17:27:32.920037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.878 [2024-07-24 17:27:32.920050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:46.878 [2024-07-24 17:27:32.920076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:26:46.878 [2024-07-24 17:27:32.920092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.878 [2024-07-24 17:27:32.935491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.878 [2024-07-24 17:27:32.935548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:46.878 [2024-07-24 17:27:32.935564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.341 ms 00:26:46.879 [2024-07-24 17:27:32.935578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:32.936100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-07-24 17:27:32.936133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:46.879 [2024-07-24 17:27:32.936154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:26:46.879 [2024-07-24 17:27:32.936169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:32.983065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.879 [2024-07-24 17:27:32.983127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.879 [2024-07-24 17:27:32.983143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.879 [2024-07-24 17:27:32.983157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:32.983228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.879 [2024-07-24 17:27:32.983247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.879 [2024-07-24 17:27:32.983279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.879 [2024-07-24 17:27:32.983291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:32.983407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.879 [2024-07-24 17:27:32.983432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.879 [2024-07-24 17:27:32.983445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.879 [2024-07-24 17:27:32.983458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:32.983484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.879 [2024-07-24 17:27:32.983512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:26:46.879 [2024-07-24 17:27:32.983525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.879 [2024-07-24 17:27:32.983541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-07-24 17:27:33.076286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.879 [2024-07-24 17:27:33.076356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.879 [2024-07-24 17:27:33.076374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.879 [2024-07-24 17:27:33.076388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.147218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.138 [2024-07-24 17:27:33.147240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.147253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.147387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.138 [2024-07-24 17:27:33.147400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.147413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.147577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.138 [2024-07-24 17:27:33.147590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.147603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.147791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.138 [2024-07-24 17:27:33.147805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.147819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.147899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:47.138 [2024-07-24 17:27:33.147912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.147926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.147981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.148000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.138 [2024-07-24 17:27:33.148012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.148026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.148099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.138 [2024-07-24 17:27:33.148123] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.138 [2024-07-24 17:27:33.148136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.138 [2024-07-24 17:27:33.148149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.138 [2024-07-24 17:27:33.148326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.810 ms, result 0 00:26:47.138 true 00:26:47.138 17:27:33 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80133 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80133 ']' 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80133 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80133 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.138 killing process with pid 80133 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80133' 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80133 00:26:47.138 17:27:33 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80133 00:26:49.705 17:27:35 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:53.892 262144+0 records in 00:26:53.892 262144+0 records out 00:26:53.892 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00848 s, 268 MB/s 00:26:53.892 17:27:39 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:55.266 17:27:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:55.266 [2024-07-24 17:27:41.483469] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
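The restore flow captured above reduces to a handful of shell steps. A minimal sketch, with commands, paths, and the pid (80133) taken verbatim from this run; the redirection of the echoed JSON fragments into test/ftl/config/ftl.json is an assumption (the trace shows the echo/RPC steps at restore.sh lines @61-@65 but not the redirect, while spdk_dd at @73 consumes that file):

    # Persist the live bdev configuration as a standalone JSON config
    {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed target, consumed by spdk_dd below
    # Clean FTL shutdown (the 'FTL shutdown' management process traced above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
    killprocess 80133        # autotest_common.sh helper: kill the SPDK target and wait on it
    # 1 GiB of random test data plus a reference checksum for the restore check
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # Standalone spdk_dd re-creates ftl0 from the saved config and writes the test file into it
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The dd figures are self-consistent: 262144 blocks x 4096 B = 1073741824 B (1.0 GiB), and 1073741824 B / 4.00848 s is roughly 268 MB/s in the decimal megabytes dd reports. The spdk_dd copy that follows averages about 23 MBps over the 1024 MB transfer, matching its per-chunk progress lines.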
00:26:55.267 [2024-07-24 17:27:41.483715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80359 ] 00:26:55.524 [2024-07-24 17:27:41.659031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.783 [2024-07-24 17:27:41.902066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.041 [2024-07-24 17:27:42.203030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:56.041 [2024-07-24 17:27:42.203125] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:56.300 [2024-07-24 17:27:42.363523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.363587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:56.300 [2024-07-24 17:27:42.363622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:56.300 [2024-07-24 17:27:42.363633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.363705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.363724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:56.300 [2024-07-24 17:27:42.363735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:56.300 [2024-07-24 17:27:42.363749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.363782] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:56.300 [2024-07-24 17:27:42.364727] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:56.300 [2024-07-24 17:27:42.364795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.364825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:56.300 [2024-07-24 17:27:42.364836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:26:56.300 [2024-07-24 17:27:42.364847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.366754] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:56.300 [2024-07-24 17:27:42.381063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.381136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:56.300 [2024-07-24 17:27:42.381168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.310 ms 00:26:56.300 [2024-07-24 17:27:42.381178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.381244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.381265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:56.300 [2024-07-24 17:27:42.381277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:56.300 [2024-07-24 17:27:42.381286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.389951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:56.300 [2024-07-24 17:27:42.390007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:56.300 [2024-07-24 17:27:42.390037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.551 ms 00:26:56.300 [2024-07-24 17:27:42.390047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.390136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.390154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:56.300 [2024-07-24 17:27:42.390165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:56.300 [2024-07-24 17:27:42.390175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.390230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.390277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:56.300 [2024-07-24 17:27:42.390289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:56.300 [2024-07-24 17:27:42.390316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.390348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:56.300 [2024-07-24 17:27:42.394833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.394884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:56.300 [2024-07-24 17:27:42.394914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.493 ms 00:26:56.300 [2024-07-24 17:27:42.394925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.395008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.395025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:56.300 [2024-07-24 17:27:42.395037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:56.300 [2024-07-24 17:27:42.395047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.395111] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:56.300 [2024-07-24 17:27:42.395144] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:56.300 [2024-07-24 17:27:42.395219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:56.300 [2024-07-24 17:27:42.395259] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:56.300 [2024-07-24 17:27:42.395373] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:56.300 [2024-07-24 17:27:42.395388] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:56.300 [2024-07-24 17:27:42.395402] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:56.300 [2024-07-24 17:27:42.395417] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:56.300 [2024-07-24 17:27:42.395430] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:56.300 [2024-07-24 17:27:42.395441] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:56.300 [2024-07-24 17:27:42.395452] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:56.300 [2024-07-24 17:27:42.395462] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:56.300 [2024-07-24 17:27:42.395472] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:56.300 [2024-07-24 17:27:42.395483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.395498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:56.300 [2024-07-24 17:27:42.395510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:26:56.300 [2024-07-24 17:27:42.395520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.300 [2024-07-24 17:27:42.395608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.300 [2024-07-24 17:27:42.395622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:56.300 [2024-07-24 17:27:42.395633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:56.300 [2024-07-24 17:27:42.395643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.395758] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:56.301 [2024-07-24 17:27:42.395794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:56.301 [2024-07-24 17:27:42.395813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:56.301 [2024-07-24 17:27:42.395824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.395835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:56.301 [2024-07-24 17:27:42.395844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.395854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:56.301 [2024-07-24 17:27:42.395864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:56.301 [2024-07-24 17:27:42.395874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:56.301 [2024-07-24 17:27:42.395883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:56.301 [2024-07-24 17:27:42.395893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:56.301 [2024-07-24 17:27:42.395902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:56.301 [2024-07-24 17:27:42.395912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:56.301 [2024-07-24 17:27:42.395922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:56.301 [2024-07-24 17:27:42.395932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:56.301 [2024-07-24 17:27:42.395941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.395951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:56.301 [2024-07-24 17:27:42.395961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:56.301 [2024-07-24 17:27:42.395971] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.395982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:56.301 [2024-07-24 17:27:42.396005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:56.301 [2024-07-24 17:27:42.396036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:56.301 [2024-07-24 17:27:42.396065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:56.301 [2024-07-24 17:27:42.396093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:56.301 [2024-07-24 17:27:42.396122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:56.301 [2024-07-24 17:27:42.396142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:56.301 [2024-07-24 17:27:42.396152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:56.301 [2024-07-24 17:27:42.396161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:56.301 [2024-07-24 17:27:42.396171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:56.301 [2024-07-24 17:27:42.396181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:56.301 [2024-07-24 17:27:42.396190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:56.301 [2024-07-24 17:27:42.396209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:56.301 [2024-07-24 17:27:42.396218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396227] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:56.301 [2024-07-24 17:27:42.396238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:56.301 [2024-07-24 17:27:42.396248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:56.301 [2024-07-24 17:27:42.396268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:56.301 [2024-07-24 17:27:42.396278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:56.301 [2024-07-24 17:27:42.396288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:56.301 
[2024-07-24 17:27:42.396298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:56.301 [2024-07-24 17:27:42.396310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:56.301 [2024-07-24 17:27:42.396320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:56.301 [2024-07-24 17:27:42.396331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:56.301 [2024-07-24 17:27:42.396344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:56.301 [2024-07-24 17:27:42.396367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:56.301 [2024-07-24 17:27:42.396377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:56.301 [2024-07-24 17:27:42.396388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:56.301 [2024-07-24 17:27:42.396398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:56.301 [2024-07-24 17:27:42.396409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:56.301 [2024-07-24 17:27:42.396419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:56.301 [2024-07-24 17:27:42.396430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:56.301 [2024-07-24 17:27:42.396440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:56.301 [2024-07-24 17:27:42.396449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:56.301 [2024-07-24 17:27:42.396501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:56.301 [2024-07-24 17:27:42.396512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:56.301 [2024-07-24 17:27:42.396540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:56.301 [2024-07-24 17:27:42.396551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:56.301 [2024-07-24 17:27:42.396561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:56.301 [2024-07-24 17:27:42.396573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.396584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:56.301 [2024-07-24 17:27:42.396596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:26:56.301 [2024-07-24 17:27:42.396606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.442235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.442309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:56.301 [2024-07-24 17:27:42.442344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.528 ms 00:26:56.301 [2024-07-24 17:27:42.442356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.442466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.442482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:56.301 [2024-07-24 17:27:42.442492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:56.301 [2024-07-24 17:27:42.442502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.484074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.484146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:56.301 [2024-07-24 17:27:42.484178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.474 ms 00:26:56.301 [2024-07-24 17:27:42.484188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.484246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.484261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:56.301 [2024-07-24 17:27:42.484273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:56.301 [2024-07-24 17:27:42.484289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.301 [2024-07-24 17:27:42.485015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.301 [2024-07-24 17:27:42.485057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:56.301 [2024-07-24 17:27:42.485071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:26:56.301 [2024-07-24 17:27:42.485081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.302 [2024-07-24 17:27:42.485287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.302 [2024-07-24 17:27:42.485340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:56.302 [2024-07-24 17:27:42.485368] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:26:56.302 [2024-07-24 17:27:42.485379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.302 [2024-07-24 17:27:42.501981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.302 [2024-07-24 17:27:42.502037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:56.302 [2024-07-24 17:27:42.502068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.571 ms 00:26:56.302 [2024-07-24 17:27:42.502084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.302 [2024-07-24 17:27:42.516542] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:56.302 [2024-07-24 17:27:42.516604] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:56.302 [2024-07-24 17:27:42.516637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.302 [2024-07-24 17:27:42.516648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:56.302 [2024-07-24 17:27:42.516670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.436 ms 00:26:56.302 [2024-07-24 17:27:42.516682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.541007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.541064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:56.560 [2024-07-24 17:27:42.541101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:26:56.560 [2024-07-24 17:27:42.541111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.553953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.554009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:56.560 [2024-07-24 17:27:42.554039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.798 ms 00:26:56.560 [2024-07-24 17:27:42.554049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.566572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.566631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:56.560 [2024-07-24 17:27:42.566669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.484 ms 00:26:56.560 [2024-07-24 17:27:42.566681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.567566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.567620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:56.560 [2024-07-24 17:27:42.567650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:26:56.560 [2024-07-24 17:27:42.567688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.632741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.632829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:56.560 [2024-07-24 17:27:42.632865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.025 ms 00:26:56.560 [2024-07-24 17:27:42.632876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.642976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:56.560 [2024-07-24 17:27:42.645161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.645206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:56.560 [2024-07-24 17:27:42.645236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.206 ms 00:26:56.560 [2024-07-24 17:27:42.645246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.645340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.645359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:56.560 [2024-07-24 17:27:42.645372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:56.560 [2024-07-24 17:27:42.645382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.645528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.645562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:56.560 [2024-07-24 17:27:42.645575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:56.560 [2024-07-24 17:27:42.645585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.645618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.645632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:56.560 [2024-07-24 17:27:42.645643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:56.560 [2024-07-24 17:27:42.645670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.645723] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:56.560 [2024-07-24 17:27:42.645740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.645751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:56.560 [2024-07-24 17:27:42.645768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:56.560 [2024-07-24 17:27:42.645778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.671681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.671746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:56.560 [2024-07-24 17:27:42.671777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.876 ms 00:26:56.560 [2024-07-24 17:27:42.671788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.560 [2024-07-24 17:27:42.671871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.560 [2024-07-24 17:27:42.671894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:56.560 [2024-07-24 17:27:42.671906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:56.560 [2024-07-24 17:27:42.671916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:56.560 [2024-07-24 17:27:42.673558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.450 ms, result 0 00:27:40.084  Copying: 22/1024 [MB] (22 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 163/1024 [MB] (23 MBps) Copying: 186/1024 [MB] (23 MBps) Copying: 210/1024 [MB] (23 MBps) Copying: 234/1024 [MB] (23 MBps) Copying: 257/1024 [MB] (23 MBps) Copying: 280/1024 [MB] (23 MBps) Copying: 305/1024 [MB] (24 MBps) Copying: 328/1024 [MB] (23 MBps) Copying: 351/1024 [MB] (23 MBps) Copying: 374/1024 [MB] (23 MBps) Copying: 399/1024 [MB] (25 MBps) Copying: 423/1024 [MB] (23 MBps) Copying: 446/1024 [MB] (23 MBps) Copying: 470/1024 [MB] (24 MBps) Copying: 496/1024 [MB] (25 MBps) Copying: 519/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (22 MBps) Copying: 565/1024 [MB] (23 MBps) Copying: 590/1024 [MB] (24 MBps) Copying: 613/1024 [MB] (23 MBps) Copying: 637/1024 [MB] (23 MBps) Copying: 660/1024 [MB] (23 MBps) Copying: 685/1024 [MB] (24 MBps) Copying: 707/1024 [MB] (22 MBps) Copying: 730/1024 [MB] (22 MBps) Copying: 753/1024 [MB] (23 MBps) Copying: 778/1024 [MB] (24 MBps) Copying: 801/1024 [MB] (23 MBps) Copying: 825/1024 [MB] (23 MBps) Copying: 849/1024 [MB] (23 MBps) Copying: 872/1024 [MB] (23 MBps) Copying: 896/1024 [MB] (23 MBps) Copying: 920/1024 [MB] (23 MBps) Copying: 943/1024 [MB] (23 MBps) Copying: 967/1024 [MB] (23 MBps) Copying: 991/1024 [MB] (23 MBps) Copying: 1014/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 17:28:26.089560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.089627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:40.084 [2024-07-24 17:28:26.089694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:40.084 [2024-07-24 17:28:26.089709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.089737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:40.084 [2024-07-24 17:28:26.092926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.092975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:40.084 [2024-07-24 17:28:26.092988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:27:40.084 [2024-07-24 17:28:26.092998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.094911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.094973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:40.084 [2024-07-24 17:28:26.094989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:27:40.084 [2024-07-24 17:28:26.094999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.110479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.110536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:40.084 [2024-07-24 17:28:26.110567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.460 ms 00:27:40.084 [2024-07-24 17:28:26.110577] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.115838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.115878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:40.084 [2024-07-24 17:28:26.115905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.227 ms 00:27:40.084 [2024-07-24 17:28:26.115915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.141775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.141816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:40.084 [2024-07-24 17:28:26.141846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms 00:27:40.084 [2024-07-24 17:28:26.141856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.159633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.159733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:40.084 [2024-07-24 17:28:26.159766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:27:40.084 [2024-07-24 17:28:26.159778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.159940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.159961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:40.084 [2024-07-24 17:28:26.159982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:27:40.084 [2024-07-24 17:28:26.159999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.187222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.187260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:40.084 [2024-07-24 17:28:26.187290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.203 ms 00:27:40.084 [2024-07-24 17:28:26.187303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.211537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.211576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:40.084 [2024-07-24 17:28:26.211605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.197 ms 00:27:40.084 [2024-07-24 17:28:26.211614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.235432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.235470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:40.084 [2024-07-24 17:28:26.235498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.770 ms 00:27:40.084 [2024-07-24 17:28:26.235522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.259212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.084 [2024-07-24 17:28:26.259281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:40.084 [2024-07-24 17:28:26.259309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.630 ms 
00:27:40.084 [2024-07-24 17:28:26.259319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.084 [2024-07-24 17:28:26.259357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:40.084 [2024-07-24 17:28:26.259377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free 00:27:40.086 [2024-07-24 17:28:26.260414] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:40.086 [2024-07-24 17:28:26.260423] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 816f2449-3799-4931-8b3f-6dea6be81c44 00:27:40.086 [2024-07-24 17:28:26.260433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:40.086 [2024-07-24 17:28:26.260449] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:40.086 [2024-07-24 17:28:26.260458] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:40.086 [2024-07-24 17:28:26.260468] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:40.086 [2024-07-24 17:28:26.260477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:40.086 [2024-07-24 17:28:26.260486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:40.086 [2024-07-24 17:28:26.260495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:40.086 [2024-07-24 17:28:26.260503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:40.086 [2024-07-24 17:28:26.260512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:40.086 [2024-07-24 17:28:26.260521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.086 [2024-07-24 17:28:26.260530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:40.086 [2024-07-24 17:28:26.260540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:27:40.086 [2024-07-24 17:28:26.260555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.274530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.086 [2024-07-24 17:28:26.274566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:40.086 [2024-07-24 17:28:26.274595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.954 ms 00:27:40.086 [2024-07-24 17:28:26.274616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.275117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.086 [2024-07-24 17:28:26.275135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:40.086 [2024-07-24 17:28:26.275147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:27:40.086 [2024-07-24 17:28:26.275156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.305929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.086 [2024-07-24 17:28:26.305984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.086 [2024-07-24 17:28:26.306014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.086 [2024-07-24 17:28:26.306024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.306091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.086 [2024-07-24 17:28:26.306104] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.086 [2024-07-24 17:28:26.306114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.086 [2024-07-24 17:28:26.306123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.306206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.086 [2024-07-24 17:28:26.306223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.086 [2024-07-24 17:28:26.306250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.086 [2024-07-24 17:28:26.306259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.086 [2024-07-24 17:28:26.306278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.086 [2024-07-24 17:28:26.306290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.086 [2024-07-24 17:28:26.306300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.086 [2024-07-24 17:28:26.306309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.387737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.387792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.345 [2024-07-24 17:28:26.387823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.387833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.456561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.456613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.345 [2024-07-24 17:28:26.456644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.456671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.456777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.456800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.345 [2024-07-24 17:28:26.456812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.456821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.456885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.456901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.345 [2024-07-24 17:28:26.456912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.456944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.457057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.457103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.345 [2024-07-24 17:28:26.457120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.457131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.457175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.457190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:40.345 [2024-07-24 17:28:26.457201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.457211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.457254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.457268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.345 [2024-07-24 17:28:26.457286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.457295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.457342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.345 [2024-07-24 17:28:26.457358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.345 [2024-07-24 17:28:26.457368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.345 [2024-07-24 17:28:26.457378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.345 [2024-07-24 17:28:26.457510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.914 ms, result 0 00:27:41.722 00:27:41.722 00:27:41.722 17:28:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:41.722 [2024-07-24 17:28:27.704045] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:27:41.722 [2024-07-24 17:28:27.704221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80817 ] 00:27:41.722 [2024-07-24 17:28:27.875724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.981 [2024-07-24 17:28:28.084477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.239 [2024-07-24 17:28:28.392635] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.239 [2024-07-24 17:28:28.392762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.499 [2024-07-24 17:28:28.552326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.552391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:42.499 [2024-07-24 17:28:28.552413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:42.499 [2024-07-24 17:28:28.552425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.552507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.552554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.499 [2024-07-24 17:28:28.552582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:42.499 [2024-07-24 17:28:28.552598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.552633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:42.499 [2024-07-24 17:28:28.553545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:42.499 [2024-07-24 17:28:28.553578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.553592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.499 [2024-07-24 17:28:28.553620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:27:42.499 [2024-07-24 17:28:28.553629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.555494] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:42.499 [2024-07-24 17:28:28.570210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.570246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:42.499 [2024-07-24 17:28:28.570262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.717 ms 00:27:42.499 [2024-07-24 17:28:28.570271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.570336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.570356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:42.499 [2024-07-24 17:28:28.570367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:42.499 [2024-07-24 17:28:28.570376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.579190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:42.499 [2024-07-24 17:28:28.579229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.499 [2024-07-24 17:28:28.579245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.735 ms 00:27:42.499 [2024-07-24 17:28:28.579256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.579375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.579394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.499 [2024-07-24 17:28:28.579406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:42.499 [2024-07-24 17:28:28.579415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.579470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.499 [2024-07-24 17:28:28.579488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:42.499 [2024-07-24 17:28:28.579500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:42.499 [2024-07-24 17:28:28.579509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.499 [2024-07-24 17:28:28.579539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:42.500 [2024-07-24 17:28:28.583964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.500 [2024-07-24 17:28:28.584010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.500 [2024-07-24 17:28:28.584024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms 00:27:42.500 [2024-07-24 17:28:28.584033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.500 [2024-07-24 17:28:28.584087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.500 [2024-07-24 17:28:28.584103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:42.500 [2024-07-24 17:28:28.584114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:42.500 [2024-07-24 17:28:28.584123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.500 [2024-07-24 17:28:28.584183] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:42.500 [2024-07-24 17:28:28.584214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:42.500 [2024-07-24 17:28:28.584279] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:42.500 [2024-07-24 17:28:28.584314] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:42.500 [2024-07-24 17:28:28.584406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:42.500 [2024-07-24 17:28:28.584420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:42.500 [2024-07-24 17:28:28.584434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:42.500 [2024-07-24 17:28:28.584448] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:42.500 [2024-07-24 17:28:28.584460] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:42.500 [2024-07-24 17:28:28.584471] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:42.500 [2024-07-24 17:28:28.584480] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:42.500 [2024-07-24 17:28:28.584491] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:42.500 [2024-07-24 17:28:28.584501] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:42.500 [2024-07-24 17:28:28.584512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.500 [2024-07-24 17:28:28.584527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:42.500 [2024-07-24 17:28:28.584538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:27:42.500 [2024-07-24 17:28:28.584548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.500 [2024-07-24 17:28:28.584672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.500 [2024-07-24 17:28:28.584688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:42.500 [2024-07-24 17:28:28.584700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:42.500 [2024-07-24 17:28:28.584711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.500 [2024-07-24 17:28:28.584835] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:42.500 [2024-07-24 17:28:28.584860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:42.500 [2024-07-24 17:28:28.584879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.500 [2024-07-24 17:28:28.584899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.584910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:42.500 [2024-07-24 17:28:28.584923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.584934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:42.500 [2024-07-24 17:28:28.584945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:42.500 [2024-07-24 17:28:28.584956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:42.500 [2024-07-24 17:28:28.584966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.500 [2024-07-24 17:28:28.584977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:42.500 [2024-07-24 17:28:28.584987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:42.500 [2024-07-24 17:28:28.584997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.500 [2024-07-24 17:28:28.585023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:42.500 [2024-07-24 17:28:28.585047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:42.500 [2024-07-24 17:28:28.585072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:42.500 [2024-07-24 17:28:28.585105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585114] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:42.500 [2024-07-24 17:28:28.585147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:42.500 [2024-07-24 17:28:28.585175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:42.500 [2024-07-24 17:28:28.585202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:42.500 [2024-07-24 17:28:28.585229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:42.500 [2024-07-24 17:28:28.585271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.500 [2024-07-24 17:28:28.585288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:42.500 [2024-07-24 17:28:28.585297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:42.500 [2024-07-24 17:28:28.585306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.500 [2024-07-24 17:28:28.585317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:42.500 [2024-07-24 17:28:28.585327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:42.500 [2024-07-24 17:28:28.585336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:42.500 [2024-07-24 17:28:28.585355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:42.500 [2024-07-24 17:28:28.585364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585372] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:42.500 [2024-07-24 17:28:28.585382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:42.500 [2024-07-24 17:28:28.585391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.500 [2024-07-24 17:28:28.585412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:42.500 [2024-07-24 17:28:28.585422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:42.500 [2024-07-24 17:28:28.585431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:42.500 
[2024-07-24 17:28:28.585440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:42.500 [2024-07-24 17:28:28.585450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:42.500 [2024-07-24 17:28:28.585459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:42.500 [2024-07-24 17:28:28.585470] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:42.500 [2024-07-24 17:28:28.585483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.500 [2024-07-24 17:28:28.585496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:42.500 [2024-07-24 17:28:28.585506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:42.500 [2024-07-24 17:28:28.585516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:42.500 [2024-07-24 17:28:28.585526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:42.500 [2024-07-24 17:28:28.585536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:42.500 [2024-07-24 17:28:28.585545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:42.500 [2024-07-24 17:28:28.585555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:42.500 [2024-07-24 17:28:28.585565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:42.500 [2024-07-24 17:28:28.585575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:42.500 [2024-07-24 17:28:28.585585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:42.500 [2024-07-24 17:28:28.585594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:42.500 [2024-07-24 17:28:28.585603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:42.500 [2024-07-24 17:28:28.585614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:42.500 [2024-07-24 17:28:28.585624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:42.501 [2024-07-24 17:28:28.585634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:42.501 [2024-07-24 17:28:28.585646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.501 [2024-07-24 17:28:28.585661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.501 [2024-07-24 17:28:28.585671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:42.501 [2024-07-24 17:28:28.585681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:42.501 [2024-07-24 17:28:28.585692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:42.501 [2024-07-24 17:28:28.585703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.585726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:42.501 [2024-07-24 17:28:28.585740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:27:42.501 [2024-07-24 17:28:28.585750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.632369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.632434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.501 [2024-07-24 17:28:28.632452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.542 ms 00:27:42.501 [2024-07-24 17:28:28.632463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.632575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.632591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:42.501 [2024-07-24 17:28:28.632603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:42.501 [2024-07-24 17:28:28.632613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.669044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.669101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.501 [2024-07-24 17:28:28.669118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.292 ms 00:27:42.501 [2024-07-24 17:28:28.669128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.669188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.669234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.501 [2024-07-24 17:28:28.669244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:42.501 [2024-07-24 17:28:28.669259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.669945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.669969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.501 [2024-07-24 17:28:28.669982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:27:42.501 [2024-07-24 17:28:28.669993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.670163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.670183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.501 [2024-07-24 17:28:28.670196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:27:42.501 [2024-07-24 17:28:28.670206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.685563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.685597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.501 [2024-07-24 17:28:28.685611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.298 ms 00:27:42.501 [2024-07-24 17:28:28.685625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.699346] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:42.501 [2024-07-24 17:28:28.699397] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:42.501 [2024-07-24 17:28:28.699412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.699424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:42.501 [2024-07-24 17:28:28.699435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.643 ms 00:27:42.501 [2024-07-24 17:28:28.699445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.501 [2024-07-24 17:28:28.723650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.501 [2024-07-24 17:28:28.723706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:42.501 [2024-07-24 17:28:28.723721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.157 ms 00:27:42.501 [2024-07-24 17:28:28.723731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.736788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.736837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:42.760 [2024-07-24 17:28:28.736851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.014 ms 00:27:42.760 [2024-07-24 17:28:28.736859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.749674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.749737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:42.760 [2024-07-24 17:28:28.749752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:27:42.760 [2024-07-24 17:28:28.749761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.750557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.750620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:42.760 [2024-07-24 17:28:28.750634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:27:42.760 [2024-07-24 17:28:28.750643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.818047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.818115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:42.760 [2024-07-24 17:28:28.818133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.332 ms 00:27:42.760 [2024-07-24 17:28:28.818150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.828297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:42.760 [2024-07-24 17:28:28.830625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.830675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:42.760 [2024-07-24 17:28:28.830690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.405 ms 00:27:42.760 [2024-07-24 17:28:28.830700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.830797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.830816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:42.760 [2024-07-24 17:28:28.830828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:42.760 [2024-07-24 17:28:28.830839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.831008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.831028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:42.760 [2024-07-24 17:28:28.831041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:42.760 [2024-07-24 17:28:28.831052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.831086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.831103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:42.760 [2024-07-24 17:28:28.831115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:42.760 [2024-07-24 17:28:28.831126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.831168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:42.760 [2024-07-24 17:28:28.831186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.831203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:42.760 [2024-07-24 17:28:28.831215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:42.760 [2024-07-24 17:28:28.831227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.857736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.857793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:42.760 [2024-07-24 17:28:28.857808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.484 ms 00:27:42.760 [2024-07-24 17:28:28.857826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.760 [2024-07-24 17:28:28.857904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.760 [2024-07-24 17:28:28.857922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:42.760 [2024-07-24 17:28:28.857933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:42.760 [2024-07-24 17:28:28.857943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:42.760 [2024-07-24 17:28:28.859558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.647 ms, result 0 00:28:26.679  Copying: 22/1024 [MB] (22 MBps) ... Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-24 17:29:12.661496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.661588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:26.680 [2024-07-24 17:29:12.661609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:26.680 [2024-07-24 17:29:12.661622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.661654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:26.680 [2024-07-24 17:29:12.665331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.665360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:26.680 [2024-07-24 17:29:12.665374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:28:26.680 [2024-07-24 17:29:12.665392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.665621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.665638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:26.680 [2024-07-24 17:29:12.665651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:28:26.680 [2024-07-24 17:29:12.665690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.669588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.669614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:26.680 [2024-07-24 17:29:12.669627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:28:26.680 [2024-07-24 17:29:12.669637] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.676508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.676536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:26.680 [2024-07-24 17:29:12.676551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:28:26.680 [2024-07-24 17:29:12.676562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.706342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.706377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:26.680 [2024-07-24 17:29:12.706392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.681 ms 00:28:26.680 [2024-07-24 17:29:12.706402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.723794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.723826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:26.680 [2024-07-24 17:29:12.723855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.351 ms 00:28:26.680 [2024-07-24 17:29:12.723866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.724021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.724041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:26.680 [2024-07-24 17:29:12.724058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:28:26.680 [2024-07-24 17:29:12.724068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.751286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.751328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:26.680 [2024-07-24 17:29:12.751360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.182 ms 00:28:26.680 [2024-07-24 17:29:12.751369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.776977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.777017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:26.680 [2024-07-24 17:29:12.777047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.567 ms 00:28:26.680 [2024-07-24 17:29:12.777057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.801977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.802017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:26.680 [2024-07-24 17:29:12.802061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.880 ms 00:28:26.680 [2024-07-24 17:29:12.802077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.828197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.680 [2024-07-24 17:29:12.828238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:26.680 [2024-07-24 17:29:12.828269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.055 ms 
00:28:26.680 [2024-07-24 17:29:12.828279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.680 [2024-07-24 17:29:12.828320] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:26.680 [2024-07-24 17:29:12.828342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828588] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:26.680 [2024-07-24 17:29:12.828900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 
[2024-07-24 17:29:12.828922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.828998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:28:26.681 [2024-07-24 17:29:12.829251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:26.681 [2024-07-24 17:29:12.829599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:26.681 [2024-07-24 17:29:12.829610] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 816f2449-3799-4931-8b3f-6dea6be81c44 00:28:26.681 [2024-07-24 17:29:12.829629] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:26.681 [2024-07-24 17:29:12.829640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:26.681 [2024-07-24 17:29:12.829650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:26.681 [2024-07-24 17:29:12.829662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:26.681 [2024-07-24 17:29:12.829672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:26.681 [2024-07-24 17:29:12.829684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:26.681 [2024-07-24 17:29:12.829695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:26.681 [2024-07-24 17:29:12.829704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:26.681 [2024-07-24 17:29:12.829714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:26.681 [2024-07-24 17:29:12.829726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.681 [2024-07-24 17:29:12.829738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:26.681 [2024-07-24 17:29:12.829766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:28:26.681 [2024-07-24 17:29:12.829778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.844618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.681 [2024-07-24 17:29:12.844700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:26.681 [2024-07-24 17:29:12.844745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.800 ms 00:28:26.681 [2024-07-24 17:29:12.844756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.845266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.681 [2024-07-24 17:29:12.845312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:26.681 [2024-07-24 17:29:12.845343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:28:26.681 [2024-07-24 17:29:12.845353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.877149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.681 [2024-07-24 17:29:12.877189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:26.681 [2024-07-24 17:29:12.877219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.681 [2024-07-24 17:29:12.877229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.877280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.681 [2024-07-24 17:29:12.877294] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:26.681 [2024-07-24 17:29:12.877305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.681 [2024-07-24 17:29:12.877315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.877406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.681 [2024-07-24 17:29:12.877424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:26.681 [2024-07-24 17:29:12.877435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.681 [2024-07-24 17:29:12.877445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.681 [2024-07-24 17:29:12.877465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.681 [2024-07-24 17:29:12.877477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:26.681 [2024-07-24 17:29:12.877488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.681 [2024-07-24 17:29:12.877497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:12.959963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:12.960020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:26.940 [2024-07-24 17:29:12.960053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:12.960063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:26.940 [2024-07-24 17:29:13.030358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.030368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.940 [2024-07-24 17:29:13.030468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.030478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.940 [2024-07-24 17:29:13.030577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.030586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.940 [2024-07-24 17:29:13.030791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.030817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:26.940 [2024-07-24 17:29:13.030895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.030906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.030976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.030999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.940 [2024-07-24 17:29:13.031010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.031020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.031106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.940 [2024-07-24 17:29:13.031138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.940 [2024-07-24 17:29:13.031166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.940 [2024-07-24 17:29:13.031176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.940 [2024-07-24 17:29:13.031332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.819 ms, result 0 00:28:27.872 00:28:27.872 00:28:27.872 17:29:13 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:29.771 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:29.771 17:29:15 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:29.771 [2024-07-24 17:29:15.852789] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
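The two ftl.ftl_restore commands just above are the core of this phase of the test: md5sum -c verifies the test data against its recorded checksum, and spdk_dd then replays the test file onto the ftl0 bdev at a new block offset (--seek=131072) so the next startup has dirty state to restore. Stripped of the harness, the step looks roughly like this (a sketch only; SPDK_DIR is an illustrative variable and the real sequencing lives in test/ftl/restore.sh):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # verify the previous write pass against its recorded md5
  md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"
  # write the test file back onto the FTL bdev at a block offset;
  # --json loads the SPDK configuration saved for this FTL instance
  "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile" --ob=ftl0 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json" --seek=131072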
00:28:29.771 [2024-07-24 17:29:15.852972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81301 ] 00:28:30.029 [2024-07-24 17:29:16.025035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.029 [2024-07-24 17:29:16.254424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.596 [2024-07-24 17:29:16.552168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.596 [2024-07-24 17:29:16.552432] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.596 [2024-07-24 17:29:16.712487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.712534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:30.596 [2024-07-24 17:29:16.712571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:30.596 [2024-07-24 17:29:16.712582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.712640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.712657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.596 [2024-07-24 17:29:16.712708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:30.596 [2024-07-24 17:29:16.712725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.712759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:30.596 [2024-07-24 17:29:16.713533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:30.596 [2024-07-24 17:29:16.713565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.713577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:30.596 [2024-07-24 17:29:16.713588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:28:30.596 [2024-07-24 17:29:16.713598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.715736] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:30.596 [2024-07-24 17:29:16.729879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.729920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:30.596 [2024-07-24 17:29:16.729953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.145 ms 00:28:30.596 [2024-07-24 17:29:16.729963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.730028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.730049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:30.596 [2024-07-24 17:29:16.730060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:30.596 [2024-07-24 17:29:16.730070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.738876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:30.596 [2024-07-24 17:29:16.738915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.596 [2024-07-24 17:29:16.738968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.725 ms 00:28:30.596 [2024-07-24 17:29:16.738988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.739081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.739099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.596 [2024-07-24 17:29:16.739110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:30.596 [2024-07-24 17:29:16.739121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.739176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.739192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:30.596 [2024-07-24 17:29:16.739204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:30.596 [2024-07-24 17:29:16.739215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.739246] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:30.596 [2024-07-24 17:29:16.743753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.743790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.596 [2024-07-24 17:29:16.743822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.515 ms 00:28:30.596 [2024-07-24 17:29:16.743832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.743875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.596 [2024-07-24 17:29:16.743889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:30.596 [2024-07-24 17:29:16.743901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:30.596 [2024-07-24 17:29:16.743910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.596 [2024-07-24 17:29:16.743975] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:30.596 [2024-07-24 17:29:16.744007] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:30.596 [2024-07-24 17:29:16.744047] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:30.597 [2024-07-24 17:29:16.744084] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:30.597 [2024-07-24 17:29:16.744173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:30.597 [2024-07-24 17:29:16.744187] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:30.597 [2024-07-24 17:29:16.744223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:30.597 [2024-07-24 17:29:16.744245] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744257] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744269] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:30.597 [2024-07-24 17:29:16.744278] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:30.597 [2024-07-24 17:29:16.744288] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:30.597 [2024-07-24 17:29:16.744298] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:30.597 [2024-07-24 17:29:16.744309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.597 [2024-07-24 17:29:16.744326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:30.597 [2024-07-24 17:29:16.744337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:28:30.597 [2024-07-24 17:29:16.744347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.597 [2024-07-24 17:29:16.744432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.597 [2024-07-24 17:29:16.744445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:30.597 [2024-07-24 17:29:16.744456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:30.597 [2024-07-24 17:29:16.744466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.597 [2024-07-24 17:29:16.744558] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:30.597 [2024-07-24 17:29:16.744573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:30.597 [2024-07-24 17:29:16.744590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:30.597 [2024-07-24 17:29:16.744619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:30.597 [2024-07-24 17:29:16.744711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.597 [2024-07-24 17:29:16.744735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:30.597 [2024-07-24 17:29:16.744744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:30.597 [2024-07-24 17:29:16.744754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.597 [2024-07-24 17:29:16.744764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:30.597 [2024-07-24 17:29:16.744774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:30.597 [2024-07-24 17:29:16.744785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:30.597 [2024-07-24 17:29:16.744808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744817] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:30.597 [2024-07-24 17:29:16.744850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:30.597 [2024-07-24 17:29:16.744880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:30.597 [2024-07-24 17:29:16.744909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:30.597 [2024-07-24 17:29:16.744938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:30.597 [2024-07-24 17:29:16.744955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.597 [2024-07-24 17:29:16.744964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:30.597 [2024-07-24 17:29:16.744990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:30.597 [2024-07-24 17:29:16.745016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.597 [2024-07-24 17:29:16.745026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:30.597 [2024-07-24 17:29:16.745050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:30.597 [2024-07-24 17:29:16.745060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.597 [2024-07-24 17:29:16.745070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:30.597 [2024-07-24 17:29:16.745079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:30.597 [2024-07-24 17:29:16.745089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.745099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:30.597 [2024-07-24 17:29:16.745109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:30.597 [2024-07-24 17:29:16.745118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.745128] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:30.597 [2024-07-24 17:29:16.745139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:30.597 [2024-07-24 17:29:16.745150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.597 [2024-07-24 17:29:16.745160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.597 [2024-07-24 17:29:16.745172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:30.597 [2024-07-24 17:29:16.745182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:30.597 [2024-07-24 17:29:16.745193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:30.597 
[2024-07-24 17:29:16.745203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:30.597 [2024-07-24 17:29:16.745213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:30.597 [2024-07-24 17:29:16.745223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:30.597 [2024-07-24 17:29:16.745235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:30.597 [2024-07-24 17:29:16.745248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:30.597 [2024-07-24 17:29:16.745272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:30.597 [2024-07-24 17:29:16.745283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:30.597 [2024-07-24 17:29:16.745293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:30.597 [2024-07-24 17:29:16.745304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:30.597 [2024-07-24 17:29:16.745314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:30.597 [2024-07-24 17:29:16.745325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:30.597 [2024-07-24 17:29:16.745335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:30.597 [2024-07-24 17:29:16.745345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:30.597 [2024-07-24 17:29:16.745356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:30.597 [2024-07-24 17:29:16.745407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:30.597 [2024-07-24 17:29:16.745419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.597 [2024-07-24 17:29:16.745447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:30.597 [2024-07-24 17:29:16.745459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:30.597 [2024-07-24 17:29:16.745470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:30.597 [2024-07-24 17:29:16.745482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.597 [2024-07-24 17:29:16.745494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:30.597 [2024-07-24 17:29:16.745505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:28:30.597 [2024-07-24 17:29:16.745516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.792069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.792121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:30.598 [2024-07-24 17:29:16.792158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.488 ms 00:28:30.598 [2024-07-24 17:29:16.792169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.792274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.792290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:30.598 [2024-07-24 17:29:16.792301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:30.598 [2024-07-24 17:29:16.792310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.828755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.828987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:30.598 [2024-07-24 17:29:16.829119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.362 ms 00:28:30.598 [2024-07-24 17:29:16.829166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.829309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.829361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:30.598 [2024-07-24 17:29:16.829378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.598 [2024-07-24 17:29:16.829397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.830086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.830105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:30.598 [2024-07-24 17:29:16.830117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:28:30.598 [2024-07-24 17:29:16.830127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.598 [2024-07-24 17:29:16.830282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.598 [2024-07-24 17:29:16.830300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:30.598 [2024-07-24 17:29:16.830312] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:28:30.598 [2024-07-24 17:29:16.830322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.846235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.846273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:30.871 [2024-07-24 17:29:16.846305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:28:30.871 [2024-07-24 17:29:16.846321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.860702] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:30.871 [2024-07-24 17:29:16.860744] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:30.871 [2024-07-24 17:29:16.860777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.860788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:30.871 [2024-07-24 17:29:16.860801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.340 ms 00:28:30.871 [2024-07-24 17:29:16.860810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.885603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.885677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:30.871 [2024-07-24 17:29:16.885711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.742 ms 00:28:30.871 [2024-07-24 17:29:16.885721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.898463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.898519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:30.871 [2024-07-24 17:29:16.898552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.698 ms 00:28:30.871 [2024-07-24 17:29:16.898562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.911120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.911160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:30.871 [2024-07-24 17:29:16.911192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.518 ms 00:28:30.871 [2024-07-24 17:29:16.911201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.911935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.911962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:30.871 [2024-07-24 17:29:16.911976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:28:30.871 [2024-07-24 17:29:16.911986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.984737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.984795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:30.871 [2024-07-24 17:29:16.984832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.723 ms 00:28:30.871 [2024-07-24 17:29:16.984851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.995736] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:30.871 [2024-07-24 17:29:16.998575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.998608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:30.871 [2024-07-24 17:29:16.998625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.664 ms 00:28:30.871 [2024-07-24 17:29:16.998637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.998801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.998822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:30.871 [2024-07-24 17:29:16.998836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:30.871 [2024-07-24 17:29:16.998864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.998993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.999013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:30.871 [2024-07-24 17:29:16.999026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:30.871 [2024-07-24 17:29:16.999039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.999074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.999088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:30.871 [2024-07-24 17:29:16.999102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:30.871 [2024-07-24 17:29:16.999120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:16.999162] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:30.871 [2024-07-24 17:29:16.999180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:16.999196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:30.871 [2024-07-24 17:29:16.999209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:30.871 [2024-07-24 17:29:16.999220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:17.027122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:17.027165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:30.871 [2024-07-24 17:29:17.027200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.877 ms 00:28:30.871 [2024-07-24 17:29:17.027218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.871 [2024-07-24 17:29:17.027314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.871 [2024-07-24 17:29:17.027332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:30.871 [2024-07-24 17:29:17.027344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:30.871 [2024-07-24 17:29:17.027355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:30.871 [2024-07-24 17:29:17.028904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.779 ms, result 0 00:29:16.384  Copying: 1048208/1048576 [kB] (8440 kBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-24 17:30:02.426637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.426782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:16.384 [2024-07-24 17:30:02.426821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:16.384 [2024-07-24 17:30:02.426834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.428473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:16.384 [2024-07-24 17:30:02.434649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.434697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:16.384 [2024-07-24 17:30:02.434729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:29:16.384 [2024-07-24 17:30:02.434739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.449958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.450018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:16.384 [2024-07-24 17:30:02.450053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.808 ms 00:29:16.384 [2024-07-24 17:30:02.450063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.471019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.471067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:16.384 [2024-07-24 17:30:02.471086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.927 ms
00:29:16.384 [2024-07-24 17:30:02.471098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.477465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.477499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:16.384 [2024-07-24 17:30:02.477528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.327 ms 00:29:16.384 [2024-07-24 17:30:02.477538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.508889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.508935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:16.384 [2024-07-24 17:30:02.508969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.285 ms 00:29:16.384 [2024-07-24 17:30:02.508981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.384 [2024-07-24 17:30:02.526555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.384 [2024-07-24 17:30:02.526664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:16.384 [2024-07-24 17:30:02.526683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.500 ms 00:29:16.384 [2024-07-24 17:30:02.526695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.627059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.627130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:16.644 [2024-07-24 17:30:02.627152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.296 ms 00:29:16.644 [2024-07-24 17:30:02.627165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.659280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.659361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:16.644 [2024-07-24 17:30:02.659411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.094 ms 00:29:16.644 [2024-07-24 17:30:02.659423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.689630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.689713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:16.644 [2024-07-24 17:30:02.689747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.163 ms 00:29:16.644 [2024-07-24 17:30:02.689759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.719025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.719071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:16.644 [2024-07-24 17:30:02.719102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.208 ms 00:29:16.644 [2024-07-24 17:30:02.719114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.748453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.748497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:16.644 [2024-07-24 17:30:02.748529] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.246 ms 00:29:16.644 [2024-07-24 17:30:02.748540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.748581] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:16.644 [2024-07-24 17:30:02.748603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103424 / 261120 wr_cnt: 1 state: open 00:29:16.644 [2024-07-24 17:30:02.748618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.748999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749561] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 
17:30:02.749877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:16.644 [2024-07-24 17:30:02.749922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:16.644 [2024-07-24 17:30:02.749933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 816f2449-3799-4931-8b3f-6dea6be81c44 00:29:16.644 [2024-07-24 17:30:02.749946] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103424 00:29:16.644 [2024-07-24 17:30:02.749957] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104384 00:29:16.644 [2024-07-24 17:30:02.749969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103424 00:29:16.644 [2024-07-24 17:30:02.749988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:29:16.644 [2024-07-24 17:30:02.749999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:16.644 [2024-07-24 17:30:02.750010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:16.644 [2024-07-24 17:30:02.750025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:16.644 [2024-07-24 17:30:02.750035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:16.644 [2024-07-24 17:30:02.750045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:16.644 [2024-07-24 17:30:02.750056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.750068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:16.644 [2024-07-24 17:30:02.750080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:29:16.644 [2024-07-24 17:30:02.750091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.766900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.766941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:16.644 [2024-07-24 17:30:02.766997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.768 ms 00:29:16.644 [2024-07-24 17:30:02.767010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.767552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.644 [2024-07-24 17:30:02.767573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:16.644 [2024-07-24 17:30:02.767587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:29:16.644 [2024-07-24 17:30:02.767598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.803743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.644 [2024-07-24 17:30:02.803795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.644 [2024-07-24 17:30:02.803832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.644 [2024-07-24 17:30:02.803843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.803911] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.644 [2024-07-24 17:30:02.803925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.644 [2024-07-24 17:30:02.803937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.644 [2024-07-24 17:30:02.803951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.804045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.644 [2024-07-24 17:30:02.804063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.644 [2024-07-24 17:30:02.804074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.644 [2024-07-24 17:30:02.804091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.644 [2024-07-24 17:30:02.804111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.644 [2024-07-24 17:30:02.804123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.644 [2024-07-24 17:30:02.804133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.644 [2024-07-24 17:30:02.804142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.898438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.898510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.903 [2024-07-24 17:30:02.898544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.898562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.974623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.974719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.903 [2024-07-24 17:30:02.974765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.974777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.974879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.974896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.903 [2024-07-24 17:30:02.974907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.974918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.974996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.975014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.903 [2024-07-24 17:30:02.975027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.975037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.975162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.975181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.903 [2024-07-24 17:30:02.975194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.975205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.975252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.975275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:16.903 [2024-07-24 17:30:02.975303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.975314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.975371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.975391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.903 [2024-07-24 17:30:02.975404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.975414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.975470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.903 [2024-07-24 17:30:02.975486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.903 [2024-07-24 17:30:02.975497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.903 [2024-07-24 17:30:02.975508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.903 [2024-07-24 17:30:02.975664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.591 ms, result 0 00:29:18.283 00:29:18.283 00:29:18.283 17:30:04 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:18.569 [2024-07-24 17:30:04.566509] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:29:18.569 [2024-07-24 17:30:04.566709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81780 ] 00:29:18.569 [2024-07-24 17:30:04.738924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.827 [2024-07-24 17:30:04.940093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.085 [2024-07-24 17:30:05.259079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.085 [2024-07-24 17:30:05.259148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.344 [2024-07-24 17:30:05.420116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.420166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:19.344 [2024-07-24 17:30:05.420202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:19.344 [2024-07-24 17:30:05.420212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.420270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.420286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:19.344 [2024-07-24 17:30:05.420298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:19.344 [2024-07-24 17:30:05.420311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.420342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:19.344 [2024-07-24 17:30:05.421153] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:19.344 [2024-07-24 17:30:05.421184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.421196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:19.344 [2024-07-24 17:30:05.421207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:29:19.344 [2024-07-24 17:30:05.421217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.423185] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:19.344 [2024-07-24 17:30:05.438385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.438425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:19.344 [2024-07-24 17:30:05.438457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.202 ms 00:29:19.344 [2024-07-24 17:30:05.438468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.438532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.438552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:19.344 [2024-07-24 17:30:05.438563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:19.344 [2024-07-24 17:30:05.438572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.447269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:19.344 [2024-07-24 17:30:05.447325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:19.344 [2024-07-24 17:30:05.447356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.581 ms 00:29:19.344 [2024-07-24 17:30:05.447365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.447462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.447480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:19.344 [2024-07-24 17:30:05.447491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:19.344 [2024-07-24 17:30:05.447501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.447559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.447575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:19.344 [2024-07-24 17:30:05.447586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:19.344 [2024-07-24 17:30:05.447596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.447627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:19.344 [2024-07-24 17:30:05.452184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.452216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:19.344 [2024-07-24 17:30:05.452245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.565 ms 00:29:19.344 [2024-07-24 17:30:05.452255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.452294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.452308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:19.344 [2024-07-24 17:30:05.452320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:19.344 [2024-07-24 17:30:05.452329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.452391] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:19.344 [2024-07-24 17:30:05.452422] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:19.344 [2024-07-24 17:30:05.452459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:19.344 [2024-07-24 17:30:05.452480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:19.344 [2024-07-24 17:30:05.452569] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:19.344 [2024-07-24 17:30:05.452583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:19.344 [2024-07-24 17:30:05.452595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:19.344 [2024-07-24 17:30:05.452608] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:19.344 [2024-07-24 17:30:05.452620] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:19.344 [2024-07-24 17:30:05.452631] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:19.344 [2024-07-24 17:30:05.452640] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:19.344 [2024-07-24 17:30:05.452650] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:19.344 [2024-07-24 17:30:05.452678] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:19.344 [2024-07-24 17:30:05.452708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.452723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:19.344 [2024-07-24 17:30:05.452734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:29:19.344 [2024-07-24 17:30:05.452743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.452825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.344 [2024-07-24 17:30:05.452839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:19.344 [2024-07-24 17:30:05.452850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:19.344 [2024-07-24 17:30:05.452859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.344 [2024-07-24 17:30:05.452951] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:19.344 [2024-07-24 17:30:05.452967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:19.344 [2024-07-24 17:30:05.452997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.344 [2024-07-24 17:30:05.453008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.344 [2024-07-24 17:30:05.453019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:19.344 [2024-07-24 17:30:05.453028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:19.344 [2024-07-24 17:30:05.453037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:19.345 [2024-07-24 17:30:05.453055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.345 [2024-07-24 17:30:05.453073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:19.345 [2024-07-24 17:30:05.453082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:19.345 [2024-07-24 17:30:05.453091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.345 [2024-07-24 17:30:05.453103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:19.345 [2024-07-24 17:30:05.453113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:19.345 [2024-07-24 17:30:05.453122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:19.345 [2024-07-24 17:30:05.453141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453150] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:19.345 [2024-07-24 17:30:05.453181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:19.345 [2024-07-24 17:30:05.453209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:19.345 [2024-07-24 17:30:05.453236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:19.345 [2024-07-24 17:30:05.453264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:19.345 [2024-07-24 17:30:05.453291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.345 [2024-07-24 17:30:05.453309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:19.345 [2024-07-24 17:30:05.453318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:19.345 [2024-07-24 17:30:05.453327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.345 [2024-07-24 17:30:05.453336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:19.345 [2024-07-24 17:30:05.453345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:19.345 [2024-07-24 17:30:05.453354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:19.345 [2024-07-24 17:30:05.453373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:19.345 [2024-07-24 17:30:05.453381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453390] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:19.345 [2024-07-24 17:30:05.453400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:19.345 [2024-07-24 17:30:05.453411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.345 [2024-07-24 17:30:05.453431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:19.345 [2024-07-24 17:30:05.453441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:19.345 [2024-07-24 17:30:05.453450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:19.345 
[2024-07-24 17:30:05.453460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:19.345 [2024-07-24 17:30:05.453469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:19.345 [2024-07-24 17:30:05.453478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:19.345 [2024-07-24 17:30:05.453489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:19.345 [2024-07-24 17:30:05.453502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:19.345 [2024-07-24 17:30:05.453524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:19.345 [2024-07-24 17:30:05.453534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:19.345 [2024-07-24 17:30:05.453544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:19.345 [2024-07-24 17:30:05.453555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:19.345 [2024-07-24 17:30:05.453565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:19.345 [2024-07-24 17:30:05.453575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:19.345 [2024-07-24 17:30:05.453585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:19.345 [2024-07-24 17:30:05.453595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:19.345 [2024-07-24 17:30:05.453605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:19.345 [2024-07-24 17:30:05.453655] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:19.345 [2024-07-24 17:30:05.453666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.345 [2024-07-24 17:30:05.453708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:19.345 [2024-07-24 17:30:05.453718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:19.345 [2024-07-24 17:30:05.453728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:19.345 [2024-07-24 17:30:05.453739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.453750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:19.345 [2024-07-24 17:30:05.453761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:29:19.345 [2024-07-24 17:30:05.453771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.502121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.502186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:19.345 [2024-07-24 17:30:05.502221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.289 ms 00:29:19.345 [2024-07-24 17:30:05.502233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.502349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.502365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:19.345 [2024-07-24 17:30:05.502393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:19.345 [2024-07-24 17:30:05.502419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.541982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.542026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:19.345 [2024-07-24 17:30:05.542042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.468 ms 00:29:19.345 [2024-07-24 17:30:05.542052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.542098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.542113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:19.345 [2024-07-24 17:30:05.542123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:19.345 [2024-07-24 17:30:05.542138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.542796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.542814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:19.345 [2024-07-24 17:30:05.542826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:29:19.345 [2024-07-24 17:30:05.542836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.543018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.345 [2024-07-24 17:30:05.543038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:19.345 [2024-07-24 17:30:05.543052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:29:19.345 [2024-07-24 17:30:05.543064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.345 [2024-07-24 17:30:05.558794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.346 [2024-07-24 17:30:05.558831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:19.346 [2024-07-24 17:30:05.558846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.698 ms 00:29:19.346 [2024-07-24 17:30:05.558860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.346 [2024-07-24 17:30:05.573243] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:19.346 [2024-07-24 17:30:05.573282] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:19.346 [2024-07-24 17:30:05.573298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.346 [2024-07-24 17:30:05.573308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:19.346 [2024-07-24 17:30:05.573319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.284 ms 00:29:19.346 [2024-07-24 17:30:05.573328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.600918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.600962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:19.606 [2024-07-24 17:30:05.600993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.551 ms 00:29:19.606 [2024-07-24 17:30:05.601004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.613771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.613807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:19.606 [2024-07-24 17:30:05.613821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.719 ms 00:29:19.606 [2024-07-24 17:30:05.613831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.626149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.626186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:19.606 [2024-07-24 17:30:05.626200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.281 ms 00:29:19.606 [2024-07-24 17:30:05.626209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.626876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.626934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:19.606 [2024-07-24 17:30:05.626954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:29:19.606 [2024-07-24 17:30:05.626982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.704259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.704333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:19.606 [2024-07-24 17:30:05.704353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.246 ms 00:29:19.606 [2024-07-24 17:30:05.704369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.714390] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:19.606 [2024-07-24 17:30:05.716787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.716817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:19.606 [2024-07-24 17:30:05.716832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.352 ms 00:29:19.606 [2024-07-24 17:30:05.716841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.716930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.716947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:19.606 [2024-07-24 17:30:05.716959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:19.606 [2024-07-24 17:30:05.716968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.718691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.718735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:19.606 [2024-07-24 17:30:05.718764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:29:19.606 [2024-07-24 17:30:05.718774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.718801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.718814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:19.606 [2024-07-24 17:30:05.718824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:19.606 [2024-07-24 17:30:05.718833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.718873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:19.606 [2024-07-24 17:30:05.718888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.718902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:19.606 [2024-07-24 17:30:05.718912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:19.606 [2024-07-24 17:30:05.718921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.744209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.744247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:19.606 [2024-07-24 17:30:05.744278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.266 ms 00:29:19.606 [2024-07-24 17:30:05.744295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.606 [2024-07-24 17:30:05.744372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.606 [2024-07-24 17:30:05.744390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:19.606 [2024-07-24 17:30:05.744401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:19.606 [2024-07-24 17:30:05.744410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:19.606 [2024-07-24 17:30:05.748080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.995 ms, result 0 00:30:04.887  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-24 17:30:51.059360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.887 [2024-07-24 17:30:51.059440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:04.887 [2024-07-24 17:30:51.059486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:04.887 [2024-07-24 17:30:51.059497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.887 [2024-07-24 17:30:51.059534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:04.887 [2024-07-24 17:30:51.063255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.887 [2024-07-24 17:30:51.063335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:04.887 [2024-07-24 17:30:51.063350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:30:04.887 [2024-07-24 17:30:51.063360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.887 [2024-07-24 17:30:51.063608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.887 [2024-07-24 17:30:51.063624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:04.887 [2024-07-24 17:30:51.063635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:30:04.887 [2024-07-24 17:30:51.063644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.888 [2024-07-24 17:30:51.068924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.888 [2024-07-24 17:30:51.068963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:04.888 [2024-07-24 17:30:51.068992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.256 ms 00:30:04.888 [2024-07-24 17:30:51.069004]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.888 [2024-07-24 17:30:51.075062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.888 [2024-07-24 17:30:51.075096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:04.888 [2024-07-24 17:30:51.075125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.020 ms 00:30:04.888 [2024-07-24 17:30:51.075135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.888 [2024-07-24 17:30:51.102668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.888 [2024-07-24 17:30:51.102706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:04.888 [2024-07-24 17:30:51.102722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.459 ms 00:30:04.888 [2024-07-24 17:30:51.102731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.888 [2024-07-24 17:30:51.118211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.888 [2024-07-24 17:30:51.118249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:04.888 [2024-07-24 17:30:51.118270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.442 ms 00:30:04.888 [2024-07-24 17:30:51.118280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.147 [2024-07-24 17:30:51.245267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.147 [2024-07-24 17:30:51.245314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:05.147 [2024-07-24 17:30:51.245362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.961 ms 00:30:05.147 [2024-07-24 17:30:51.245373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.147 [2024-07-24 17:30:51.272034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.147 [2024-07-24 17:30:51.272072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:05.147 [2024-07-24 17:30:51.272102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.640 ms 00:30:05.147 [2024-07-24 17:30:51.272112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.147 [2024-07-24 17:30:51.296791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.147 [2024-07-24 17:30:51.296829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:05.147 [2024-07-24 17:30:51.296859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.642 ms 00:30:05.147 [2024-07-24 17:30:51.296869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.147 [2024-07-24 17:30:51.321559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.148 [2024-07-24 17:30:51.321595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:05.148 [2024-07-24 17:30:51.321624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.654 ms 00:30:05.148 [2024-07-24 17:30:51.321647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.148 [2024-07-24 17:30:51.346329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.148 [2024-07-24 17:30:51.346366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:05.148 [2024-07-24 17:30:51.346396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.587 ms 00:30:05.148 [2024-07-24 17:30:51.346405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.148 [2024-07-24 17:30:51.346442] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:05.148 [2024-07-24 17:30:51.346463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:30:05.148 [2024-07-24 17:30:51.346476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 
[2024-07-24 17:30:51.346753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.346999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:30:05.148 [2024-07-24 17:30:51.347051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:05.148 [2024-07-24 17:30:51.347443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:05.149 [2024-07-24 17:30:51.347669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:05.149 [2024-07-24 17:30:51.347680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 816f2449-3799-4931-8b3f-6dea6be81c44 00:30:05.149 [2024-07-24 17:30:51.347699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:30:05.149 [2024-07-24 17:30:51.347712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31424 00:30:05.149 [2024-07-24 17:30:51.347722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30464 00:30:05.149 [2024-07-24 17:30:51.347739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0315 00:30:05.149 [2024-07-24 17:30:51.347749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:05.149 [2024-07-24 17:30:51.347760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:05.149 [2024-07-24 17:30:51.347773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:05.149 [2024-07-24 17:30:51.347782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:05.149 [2024-07-24 17:30:51.347791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:05.149 [2024-07-24 17:30:51.347801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.149 [2024-07-24 17:30:51.347812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:05.149 [2024-07-24 17:30:51.347823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:30:05.149 [2024-07-24 17:30:51.347833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.149 [2024-07-24 17:30:51.362450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.149 [2024-07-24 17:30:51.362486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:05.149 [2024-07-24 17:30:51.362501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.596 ms 00:30:05.149 [2024-07-24 17:30:51.362523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.149 [2024-07-24 17:30:51.363101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.149 [2024-07-24 17:30:51.363127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:05.149 [2024-07-24 17:30:51.363141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:30:05.149 [2024-07-24 17:30:51.363151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.395589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.395627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:05.408 [2024-07-24 17:30:51.395677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.395705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.395761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 
17:30:51.395775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:05.408 [2024-07-24 17:30:51.395802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.395811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.395881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.395899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:05.408 [2024-07-24 17:30:51.395910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.395926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.395946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.395958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:05.408 [2024-07-24 17:30:51.395969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.395979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.481012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.481061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.408 [2024-07-24 17:30:51.481093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.481109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.408 [2024-07-24 17:30:51.556155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.408 [2024-07-24 17:30:51.556274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.408 [2024-07-24 17:30:51.556357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.408 [2024-07-24 17:30:51.556495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556545] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.408 [2024-07-24 17:30:51.556576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.408 [2024-07-24 17:30:51.556703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.408 [2024-07-24 17:30:51.556804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.408 [2024-07-24 17:30:51.556816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.408 [2024-07-24 17:30:51.556826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.408 [2024-07-24 17:30:51.556966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.577 ms, result 0 00:30:06.382 00:30:06.382 00:30:06.382 17:30:52 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.281 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:08.281 17:30:54 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:08.281 17:30:54 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:30:08.281 17:30:54 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.539 Process with pid 80133 is not found 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80133 00:30:08.539 17:30:54 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80133 ']' 00:30:08.539 17:30:54 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80133 00:30:08.539 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80133) - No such process 00:30:08.539 17:30:54 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80133 is not found' 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:30:08.539 Remove shared memory files 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:08.539 17:30:54 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:30:08.539 ************************************ 00:30:08.539 END TEST ftl_restore 00:30:08.539 ************************************ 00:30:08.539 00:30:08.539 real 
3m30.238s 00:30:08.539 user 3m16.822s 00:30:08.539 sys 0m14.812s 00:30:08.539 17:30:54 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.539 17:30:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:30:08.539 17:30:54 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:30:08.539 17:30:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:08.539 17:30:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.539 17:30:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:08.539 ************************************ 00:30:08.539 START TEST ftl_dirty_shutdown 00:30:08.539 ************************************ 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:30:08.539 * Looking for test storage... 00:30:08.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 
-- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82334 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82334 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82334 ']' 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.539 17:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:08.797 [2024-07-24 17:30:54.848540] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
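Behind these xtrace lines the harness follows the standard SPDK pattern: launch spdk_tgt on one core, wait for its RPC socket to come up, then drive everything through rpc.py. A minimal standalone sketch of that pattern, assuming $SPDK_DIR points at an SPDK checkout and using an rpc_get_methods probe in place of the harness's waitforlisten helper:
  # start the target on core 0 (-m 0x1) and poll until the default RPC socket answers
  "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  until "$SPDK_DIR"/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  # ...issue bdev_* RPCs here, then shut the target down...
  kill "$svcpid"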
00:30:08.797 [2024-07-24 17:30:54.848735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82334 ] 00:30:08.797 [2024-07-24 17:30:55.024597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.362 [2024-07-24 17:30:55.296536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:09.934 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:30:10.198 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:10.456 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:10.456 { 00:30:10.456 "name": "nvme0n1", 00:30:10.456 "aliases": [ 00:30:10.456 "185a108e-35c0-45ec-bae1-bd44ae176ce9" 00:30:10.456 ], 00:30:10.456 "product_name": "NVMe disk", 00:30:10.456 "block_size": 4096, 00:30:10.456 "num_blocks": 1310720, 00:30:10.456 "uuid": "185a108e-35c0-45ec-bae1-bd44ae176ce9", 00:30:10.456 "assigned_rate_limits": { 00:30:10.456 "rw_ios_per_sec": 0, 00:30:10.456 "rw_mbytes_per_sec": 0, 00:30:10.456 "r_mbytes_per_sec": 0, 00:30:10.456 "w_mbytes_per_sec": 0 00:30:10.456 }, 00:30:10.456 "claimed": true, 00:30:10.456 "claim_type": "read_many_write_one", 00:30:10.456 "zoned": false, 00:30:10.456 "supported_io_types": { 00:30:10.456 "read": true, 00:30:10.456 "write": true, 00:30:10.456 "unmap": true, 00:30:10.456 "flush": true, 00:30:10.456 "reset": true, 00:30:10.456 "nvme_admin": true, 00:30:10.456 "nvme_io": true, 00:30:10.456 "nvme_io_md": false, 00:30:10.456 "write_zeroes": true, 00:30:10.456 "zcopy": false, 00:30:10.456 "get_zone_info": false, 00:30:10.456 "zone_management": false, 00:30:10.456 "zone_append": false, 00:30:10.456 "compare": true, 00:30:10.456 "compare_and_write": false, 00:30:10.456 "abort": true, 00:30:10.456 "seek_hole": false, 00:30:10.456 "seek_data": false, 00:30:10.456 "copy": true, 00:30:10.456 
"nvme_iov_md": false 00:30:10.456 }, 00:30:10.456 "driver_specific": { 00:30:10.456 "nvme": [ 00:30:10.456 { 00:30:10.456 "pci_address": "0000:00:11.0", 00:30:10.456 "trid": { 00:30:10.456 "trtype": "PCIe", 00:30:10.456 "traddr": "0000:00:11.0" 00:30:10.456 }, 00:30:10.456 "ctrlr_data": { 00:30:10.456 "cntlid": 0, 00:30:10.456 "vendor_id": "0x1b36", 00:30:10.456 "model_number": "QEMU NVMe Ctrl", 00:30:10.456 "serial_number": "12341", 00:30:10.456 "firmware_revision": "8.0.0", 00:30:10.456 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:10.456 "oacs": { 00:30:10.456 "security": 0, 00:30:10.456 "format": 1, 00:30:10.456 "firmware": 0, 00:30:10.456 "ns_manage": 1 00:30:10.456 }, 00:30:10.456 "multi_ctrlr": false, 00:30:10.456 "ana_reporting": false 00:30:10.456 }, 00:30:10.456 "vs": { 00:30:10.456 "nvme_version": "1.4" 00:30:10.456 }, 00:30:10.456 "ns_data": { 00:30:10.456 "id": 1, 00:30:10.456 "can_share": false 00:30:10.456 } 00:30:10.456 } 00:30:10.456 ], 00:30:10.456 "mp_policy": "active_passive" 00:30:10.456 } 00:30:10.456 } 00:30:10.456 ]' 00:30:10.456 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:10.456 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:10.456 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:10.714 17:30:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.972 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fa41016a-79b3-4260-abe4-f968dfb83ea3 00:30:10.972 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:10.972 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa41016a-79b3-4260-abe4-f968dfb83ea3 00:30:11.230 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:11.489 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=70efb99f-9b74-4e6f-8449-03fe53a45c5e 00:30:11.489 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 70efb99f-9b74-4e6f-8449-03fe53a45c5e 00:30:11.747 17:30:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:11.748 
17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:30:11.748 17:30:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:12.006 { 00:30:12.006 "name": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:12.006 "aliases": [ 00:30:12.006 "lvs/nvme0n1p0" 00:30:12.006 ], 00:30:12.006 "product_name": "Logical Volume", 00:30:12.006 "block_size": 4096, 00:30:12.006 "num_blocks": 26476544, 00:30:12.006 "uuid": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:12.006 "assigned_rate_limits": { 00:30:12.006 "rw_ios_per_sec": 0, 00:30:12.006 "rw_mbytes_per_sec": 0, 00:30:12.006 "r_mbytes_per_sec": 0, 00:30:12.006 "w_mbytes_per_sec": 0 00:30:12.006 }, 00:30:12.006 "claimed": false, 00:30:12.006 "zoned": false, 00:30:12.006 "supported_io_types": { 00:30:12.006 "read": true, 00:30:12.006 "write": true, 00:30:12.006 "unmap": true, 00:30:12.006 "flush": false, 00:30:12.006 "reset": true, 00:30:12.006 "nvme_admin": false, 00:30:12.006 "nvme_io": false, 00:30:12.006 "nvme_io_md": false, 00:30:12.006 "write_zeroes": true, 00:30:12.006 "zcopy": false, 00:30:12.006 "get_zone_info": false, 00:30:12.006 "zone_management": false, 00:30:12.006 "zone_append": false, 00:30:12.006 "compare": false, 00:30:12.006 "compare_and_write": false, 00:30:12.006 "abort": false, 00:30:12.006 "seek_hole": true, 00:30:12.006 "seek_data": true, 00:30:12.006 "copy": false, 00:30:12.006 "nvme_iov_md": false 00:30:12.006 }, 00:30:12.006 "driver_specific": { 00:30:12.006 "lvol": { 00:30:12.006 "lvol_store_uuid": "70efb99f-9b74-4e6f-8449-03fe53a45c5e", 00:30:12.006 "base_bdev": "nvme0n1", 00:30:12.006 "thin_provision": true, 00:30:12.006 "num_allocated_clusters": 0, 00:30:12.006 "snapshot": false, 00:30:12.006 "clone": false, 00:30:12.006 "esnap_clone": false 00:30:12.006 } 00:30:12.006 } 00:30:12.006 } 00:30:12.006 ]' 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:12.006 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:30:12.265 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:12.523 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:12.523 { 00:30:12.523 "name": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:12.523 "aliases": [ 00:30:12.523 "lvs/nvme0n1p0" 00:30:12.523 ], 00:30:12.523 "product_name": "Logical Volume", 00:30:12.523 "block_size": 4096, 00:30:12.523 "num_blocks": 26476544, 00:30:12.523 "uuid": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:12.523 "assigned_rate_limits": { 00:30:12.523 "rw_ios_per_sec": 0, 00:30:12.523 "rw_mbytes_per_sec": 0, 00:30:12.523 "r_mbytes_per_sec": 0, 00:30:12.523 "w_mbytes_per_sec": 0 00:30:12.523 }, 00:30:12.523 "claimed": false, 00:30:12.523 "zoned": false, 00:30:12.524 "supported_io_types": { 00:30:12.524 "read": true, 00:30:12.524 "write": true, 00:30:12.524 "unmap": true, 00:30:12.524 "flush": false, 00:30:12.524 "reset": true, 00:30:12.524 "nvme_admin": false, 00:30:12.524 "nvme_io": false, 00:30:12.524 "nvme_io_md": false, 00:30:12.524 "write_zeroes": true, 00:30:12.524 "zcopy": false, 00:30:12.524 "get_zone_info": false, 00:30:12.524 "zone_management": false, 00:30:12.524 "zone_append": false, 00:30:12.524 "compare": false, 00:30:12.524 "compare_and_write": false, 00:30:12.524 "abort": false, 00:30:12.524 "seek_hole": true, 00:30:12.524 "seek_data": true, 00:30:12.524 "copy": false, 00:30:12.524 "nvme_iov_md": false 00:30:12.524 }, 00:30:12.524 "driver_specific": { 00:30:12.524 "lvol": { 00:30:12.524 "lvol_store_uuid": "70efb99f-9b74-4e6f-8449-03fe53a45c5e", 00:30:12.524 "base_bdev": "nvme0n1", 00:30:12.524 "thin_provision": true, 00:30:12.524 "num_allocated_clusters": 0, 00:30:12.524 "snapshot": false, 00:30:12.524 "clone": false, 00:30:12.524 "esnap_clone": false 00:30:12.524 } 00:30:12.524 } 00:30:12.524 } 00:30:12.524 ]' 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:30:12.524 17:30:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:12.782 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:30:13.041 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5263f568-188e-4ab4-be0e-a609b6adbb9d 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:13.300 { 00:30:13.300 "name": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:13.300 "aliases": [ 00:30:13.300 "lvs/nvme0n1p0" 00:30:13.300 ], 00:30:13.300 "product_name": "Logical Volume", 00:30:13.300 "block_size": 4096, 00:30:13.300 "num_blocks": 26476544, 00:30:13.300 "uuid": "5263f568-188e-4ab4-be0e-a609b6adbb9d", 00:30:13.300 "assigned_rate_limits": { 00:30:13.300 "rw_ios_per_sec": 0, 00:30:13.300 "rw_mbytes_per_sec": 0, 00:30:13.300 "r_mbytes_per_sec": 0, 00:30:13.300 "w_mbytes_per_sec": 0 00:30:13.300 }, 00:30:13.300 "claimed": false, 00:30:13.300 "zoned": false, 00:30:13.300 "supported_io_types": { 00:30:13.300 "read": true, 00:30:13.300 "write": true, 00:30:13.300 "unmap": true, 00:30:13.300 "flush": false, 00:30:13.300 "reset": true, 00:30:13.300 "nvme_admin": false, 00:30:13.300 "nvme_io": false, 00:30:13.300 "nvme_io_md": false, 00:30:13.300 "write_zeroes": true, 00:30:13.300 "zcopy": false, 00:30:13.300 "get_zone_info": false, 00:30:13.300 "zone_management": false, 00:30:13.300 "zone_append": false, 00:30:13.300 "compare": false, 00:30:13.300 "compare_and_write": false, 00:30:13.300 "abort": false, 00:30:13.300 "seek_hole": true, 00:30:13.300 "seek_data": true, 00:30:13.300 "copy": false, 00:30:13.300 "nvme_iov_md": false 00:30:13.300 }, 00:30:13.300 "driver_specific": { 00:30:13.300 "lvol": { 00:30:13.300 "lvol_store_uuid": "70efb99f-9b74-4e6f-8449-03fe53a45c5e", 00:30:13.300 "base_bdev": "nvme0n1", 00:30:13.300 "thin_provision": true, 00:30:13.300 "num_allocated_clusters": 0, 00:30:13.300 "snapshot": false, 00:30:13.300 "clone": false, 00:30:13.300 "esnap_clone": false 00:30:13.300 } 00:30:13.300 } 00:30:13.300 } 00:30:13.300 ]' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5263f568-188e-4ab4-be0e-a609b6adbb9d 
--l2p_dram_limit 10' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:13.300 17:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5263f568-188e-4ab4-be0e-a609b6adbb9d --l2p_dram_limit 10 -c nvc0n1p0 00:30:13.560 [2024-07-24 17:30:59.601254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.601338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:13.560 [2024-07-24 17:30:59.601360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:13.560 [2024-07-24 17:30:59.601374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.601451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.601469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.560 [2024-07-24 17:30:59.601482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:13.560 [2024-07-24 17:30:59.601495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.601523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:13.560 [2024-07-24 17:30:59.602536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:13.560 [2024-07-24 17:30:59.602575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.602596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.560 [2024-07-24 17:30:59.602608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:30:13.560 [2024-07-24 17:30:59.602636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.602772] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f6835e3a-4cec-4e40-b073-a8ea29e11d28 00:30:13.560 [2024-07-24 17:30:59.604781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.604817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:13.560 [2024-07-24 17:30:59.604837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:13.560 [2024-07-24 17:30:59.604854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.614617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.614703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.560 [2024-07-24 17:30:59.614722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.704 ms 00:30:13.560 [2024-07-24 17:30:59.614733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.614845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.560 [2024-07-24 17:30:59.614878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.560 [2024-07-24 17:30:59.614909] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:30:13.560 [2024-07-24 17:30:59.614936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.560 [2024-07-24 17:30:59.615069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.561 [2024-07-24 17:30:59.615094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:13.561 [2024-07-24 17:30:59.615113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:13.561 [2024-07-24 17:30:59.615124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.561 [2024-07-24 17:30:59.615157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:13.561 [2024-07-24 17:30:59.619924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.561 [2024-07-24 17:30:59.619966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.561 [2024-07-24 17:30:59.619981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:30:13.561 [2024-07-24 17:30:59.619994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.561 [2024-07-24 17:30:59.620036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.561 [2024-07-24 17:30:59.620053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:13.561 [2024-07-24 17:30:59.620064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:13.561 [2024-07-24 17:30:59.620076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.561 [2024-07-24 17:30:59.620123] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:13.561 [2024-07-24 17:30:59.620271] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:13.561 [2024-07-24 17:30:59.620287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:13.561 [2024-07-24 17:30:59.620307] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:13.561 [2024-07-24 17:30:59.620321] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620336] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620348] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:13.561 [2024-07-24 17:30:59.620365] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:13.561 [2024-07-24 17:30:59.620376] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:13.561 [2024-07-24 17:30:59.620388] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:13.561 [2024-07-24 17:30:59.620399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.561 [2024-07-24 17:30:59.620411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:13.561 [2024-07-24 17:30:59.620422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:30:13.561 [2024-07-24 17:30:59.620434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.561 [2024-07-24 17:30:59.620513] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.561 [2024-07-24 17:30:59.620530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:13.561 [2024-07-24 17:30:59.620540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:13.561 [2024-07-24 17:30:59.620555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.561 [2024-07-24 17:30:59.620683] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:13.561 [2024-07-24 17:30:59.620706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:13.561 [2024-07-24 17:30:59.620730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:13.561 [2024-07-24 17:30:59.620767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:13.561 [2024-07-24 17:30:59.620816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.561 [2024-07-24 17:30:59.620838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:13.561 [2024-07-24 17:30:59.620853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:13.561 [2024-07-24 17:30:59.620863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.561 [2024-07-24 17:30:59.620876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:13.561 [2024-07-24 17:30:59.620886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:13.561 [2024-07-24 17:30:59.620898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:13.561 [2024-07-24 17:30:59.620922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:13.561 [2024-07-24 17:30:59.620954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.561 [2024-07-24 17:30:59.620975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:13.561 [2024-07-24 17:30:59.620987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:13.561 [2024-07-24 17:30:59.620997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.561 [2024-07-24 17:30:59.621008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:13.561 [2024-07-24 17:30:59.621018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.561 [2024-07-24 17:30:59.621039] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:13.561 [2024-07-24 17:30:59.621082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.561 [2024-07-24 17:30:59.621120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:13.561 [2024-07-24 17:30:59.621131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.561 [2024-07-24 17:30:59.621172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:13.561 [2024-07-24 17:30:59.621184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:13.561 [2024-07-24 17:30:59.621195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.561 [2024-07-24 17:30:59.621209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:13.561 [2024-07-24 17:30:59.621220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:13.561 [2024-07-24 17:30:59.621232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:13.561 [2024-07-24 17:30:59.621256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:13.561 [2024-07-24 17:30:59.621266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621278] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:13.561 [2024-07-24 17:30:59.621290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:13.561 [2024-07-24 17:30:59.621303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.561 [2024-07-24 17:30:59.621314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.561 [2024-07-24 17:30:59.621328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:13.561 [2024-07-24 17:30:59.621339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:13.561 [2024-07-24 17:30:59.621354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:13.561 [2024-07-24 17:30:59.621365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:13.561 [2024-07-24 17:30:59.621378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:13.561 [2024-07-24 17:30:59.621388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:13.561 [2024-07-24 17:30:59.621406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:13.561 [2024-07-24 17:30:59.621423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.561 [2024-07-24 17:30:59.621438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:13.561 [2024-07-24 17:30:59.621449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:13.561 [2024-07-24 17:30:59.621463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:13.561 [2024-07-24 17:30:59.621474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:13.561 [2024-07-24 17:30:59.621487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:13.561 [2024-07-24 17:30:59.621498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:13.561 [2024-07-24 17:30:59.621513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:13.561 [2024-07-24 17:30:59.621524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:13.561 [2024-07-24 17:30:59.621540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:13.562 [2024-07-24 17:30:59.621551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:13.562 [2024-07-24 17:30:59.621618] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:13.562 [2024-07-24 17:30:59.621631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:13.562 [2024-07-24 17:30:59.621657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:13.562 [2024-07-24 17:30:59.621670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:13.562 [2024-07-24 17:30:59.621682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:13.562 [2024-07-24 17:30:59.621696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.562 [2024-07-24 17:30:59.621707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:13.562 [2024-07-24 17:30:59.621721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:30:13.562 [2024-07-24 17:30:59.621732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.562 [2024-07-24 17:30:59.621805] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:13.562 [2024-07-24 17:30:59.621821] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:17.751 [2024-07-24 17:31:03.589901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.589975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:17.751 [2024-07-24 17:31:03.590015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3968.105 ms 00:30:17.751 [2024-07-24 17:31:03.590028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.624296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.624350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:17.751 [2024-07-24 17:31:03.624388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.023 ms 00:30:17.751 [2024-07-24 17:31:03.624400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.624560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.624578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:17.751 [2024-07-24 17:31:03.624597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:17.751 [2024-07-24 17:31:03.624608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.661166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.661211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:17.751 [2024-07-24 17:31:03.661246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.449 ms 00:30:17.751 [2024-07-24 17:31:03.661262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.661308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.661322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:17.751 [2024-07-24 17:31:03.661342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:17.751 [2024-07-24 17:31:03.661353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.661996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.662045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.751 [2024-07-24 17:31:03.662061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:30:17.751 [2024-07-24 17:31:03.662072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.662228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.662253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.751 [2024-07-24 17:31:03.662268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:30:17.751 [2024-07-24 17:31:03.662278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.680662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.680699] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.751 [2024-07-24 17:31:03.680733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.353 ms 00:30:17.751 [2024-07-24 17:31:03.680744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.692937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:17.751 [2024-07-24 17:31:03.696945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.696996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:17.751 [2024-07-24 17:31:03.697011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.117 ms 00:30:17.751 [2024-07-24 17:31:03.697025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.830172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.830255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:17.751 [2024-07-24 17:31:03.830277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 133.115 ms 00:30:17.751 [2024-07-24 17:31:03.830291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.830504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.830526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:17.751 [2024-07-24 17:31:03.830539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:30:17.751 [2024-07-24 17:31:03.830555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.857739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.857815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:17.751 [2024-07-24 17:31:03.857835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.097 ms 00:30:17.751 [2024-07-24 17:31:03.857855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.886859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.886923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:17.751 [2024-07-24 17:31:03.886942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.954 ms 00:30:17.751 [2024-07-24 17:31:03.886971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.887953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.887992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:17.751 [2024-07-24 17:31:03.888011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:30:17.751 [2024-07-24 17:31:03.888025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.751 [2024-07-24 17:31:03.976870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.751 [2024-07-24 17:31:03.976947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:17.751 [2024-07-24 17:31:03.976968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.779 ms 00:30:17.751 [2024-07-24 17:31:03.976987] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.005790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.009 [2024-07-24 17:31:04.005835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:18.009 [2024-07-24 17:31:04.005852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.754 ms 00:30:18.009 [2024-07-24 17:31:04.005865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.032579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.009 [2024-07-24 17:31:04.032638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:18.009 [2024-07-24 17:31:04.032654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.669 ms 00:30:18.009 [2024-07-24 17:31:04.032679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.059955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.009 [2024-07-24 17:31:04.060017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:18.009 [2024-07-24 17:31:04.060033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.231 ms 00:30:18.009 [2024-07-24 17:31:04.060046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.060094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.009 [2024-07-24 17:31:04.060115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:18.009 [2024-07-24 17:31:04.060127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:18.009 [2024-07-24 17:31:04.060143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.060255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.009 [2024-07-24 17:31:04.060280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:18.009 [2024-07-24 17:31:04.060292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:18.009 [2024-07-24 17:31:04.060304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.009 [2024-07-24 17:31:04.061991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4459.979 ms, result 0 00:30:18.009 { 00:30:18.009 "name": "ftl0", 00:30:18.009 "uuid": "f6835e3a-4cec-4e40-b073-a8ea29e11d28" 00:30:18.009 } 00:30:18.009 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:30:18.009 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:18.267 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:30:18.267 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:30:18.267 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:30:18.524 /dev/nbd0 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:30:18.524 1+0 records in 00:30:18.524 1+0 records out 00:30:18.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262548 s, 15.6 MB/s 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:30:18.524 17:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:30:18.782 [2024-07-24 17:31:04.782688] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:30:18.782 [2024-07-24 17:31:04.783198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82488 ] 00:30:18.782 [2024-07-24 17:31:04.958320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.040 [2024-07-24 17:31:05.223223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.177  Copying: 186/1024 [MB] (186 MBps) Copying: 368/1024 [MB] (181 MBps) Copying: 553/1024 [MB] (184 MBps) Copying: 732/1024 [MB] (179 MBps) Copying: 901/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 180 MBps) 00:30:26.177 00:30:26.177 17:31:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:28.075 17:31:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:30:28.075 [2024-07-24 17:31:14.233591] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
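What the trace above has been showing (dirty_shutdown.sh lines 70-77) is the data-fill phase of the test: ftl0 is exposed to the kernel as /dev/nbd0, a 1 GiB file of random data is generated and checksummed, and that file is then written through the NBD export with O_DIRECT. The Copying progress that follows is that 1 GiB write draining into the FTL device. A minimal sketch of the sequence, with $SPDK used here as shorthand for /home/vagrant/spdk_repo/spdk (an abbreviation introduced for readability, not a variable the script itself uses):

  # expose the FTL bdev as a kernel block device ($SPDK is shorthand, see above)
  modprobe nbd
  $SPDK/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  # 262144 blocks x 4096 bytes = 1 GiB of random test data
  $SPDK/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=$SPDK/test/ftl/testfile --bs=4096 --count=262144
  md5sum $SPDK/test/ftl/testfile
  # write the file into ftl0 through the NBD export, bypassing the page cache
  $SPDK/build/bin/spdk_dd -m 0x2 --if=$SPDK/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The rates are worth noticing: the urandom fill above averaged 180 MBps, while the same gigabyte lands on the FTL device at roughly 13 MBps, which is consistent with each 4 KiB block taking an NBD round trip and the full FTL write path rather than a raw copy.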
00:30:28.075 [2024-07-24 17:31:14.233804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82591 ] 00:30:28.333 [2024-07-24 17:31:14.408543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.592 [2024-07-24 17:31:14.648371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.075  Copying: 1024/1024 [MB] (average 13 MBps) 00:31:46.075 00:31:46.075 17:32:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 17:32:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 17:32:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:46.681 [2024-07-24 17:32:32.577274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:46.681 [2024-07-24 17:32:32.577357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:46.681 [2024-07-24 17:32:32.577400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:46.681 [2024-07-24 17:32:32.577421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.577466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:46.681 [2024-07-24 17:32:32.581244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.581290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:46.681 [2024-07-24 17:32:32.581309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:31:46.681 [2024-07-24 17:32:32.581327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.583288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.583350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:46.681 [2024-07-24 17:32:32.583369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.922 ms 00:31:46.681 [2024-07-24 17:32:32.583389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.601066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.601191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:46.681 [2024-07-24 17:32:32.601216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.631 ms 00:31:46.681 [2024-07-24 17:32:32.601232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.607950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.608072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:46.681 [2024-07-24 17:32:32.608099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.632 ms 00:31:46.681 [2024-07-24 17:32:32.608122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.644263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.644368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:46.681 [2024-07-24 17:32:32.644392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.865 ms 00:31:46.681 [2024-07-24 17:32:32.644408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.665436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.665555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:46.681 [2024-07-24 17:32:32.665580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.912 ms 00:31:46.681 [2024-07-24 17:32:32.665612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.666014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.666043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:46.681 [2024-07-24 17:32:32.666060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 
00:31:46.681 [2024-07-24 17:32:32.666078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.701457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.701564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:46.681 [2024-07-24 17:32:32.701588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.345 ms 00:31:46.681 [2024-07-24 17:32:32.701605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.736079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.736188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:46.681 [2024-07-24 17:32:32.736211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.336 ms 00:31:46.681 [2024-07-24 17:32:32.736227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.770810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.770915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:46.681 [2024-07-24 17:32:32.770938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.472 ms 00:31:46.681 [2024-07-24 17:32:32.770954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.681 [2024-07-24 17:32:32.804752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.681 [2024-07-24 17:32:32.804867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:46.681 [2024-07-24 17:32:32.804890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.523 ms 00:31:46.682 [2024-07-24 17:32:32.804906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.682 [2024-07-24 17:32:32.805018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:46.682 [2024-07-24 17:32:32.805051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805588] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.805991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 
17:32:32.806033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:31:46.682 [2024-07-24 17:32:32.806409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:46.682 [2024-07-24 17:32:32.806446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:46.683 [2024-07-24 17:32:32.806657] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:46.683 [2024-07-24 17:32:32.806673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6835e3a-4cec-4e40-b073-a8ea29e11d28 00:31:46.683 [2024-07-24 17:32:32.806693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:46.683 [2024-07-24 17:32:32.806710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:46.683 [2024-07-24 17:32:32.806728] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:46.683 [2024-07-24 17:32:32.806741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:46.683 [2024-07-24 17:32:32.806755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:46.683 [2024-07-24 17:32:32.806768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:46.683 [2024-07-24 17:32:32.806781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:46.683 [2024-07-24 17:32:32.806792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:46.683 [2024-07-24 17:32:32.806805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:46.683 [2024-07-24 17:32:32.806818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.683 [2024-07-24 17:32:32.806833] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:46.683 [2024-07-24 17:32:32.806846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:31:46.683 [2024-07-24 17:32:32.806861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.825346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.683 [2024-07-24 17:32:32.825428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:46.683 [2024-07-24 17:32:32.825450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.374 ms 00:31:46.683 [2024-07-24 17:32:32.825466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.825996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.683 [2024-07-24 17:32:32.826030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:46.683 [2024-07-24 17:32:32.826046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:31:46.683 [2024-07-24 17:32:32.826061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.879852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.683 [2024-07-24 17:32:32.879946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:46.683 [2024-07-24 17:32:32.879969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.683 [2024-07-24 17:32:32.879985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.880090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.683 [2024-07-24 17:32:32.880110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:46.683 [2024-07-24 17:32:32.880124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.683 [2024-07-24 17:32:32.880139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.880304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.683 [2024-07-24 17:32:32.880330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:46.683 [2024-07-24 17:32:32.880345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.683 [2024-07-24 17:32:32.880359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.683 [2024-07-24 17:32:32.880388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.683 [2024-07-24 17:32:32.880409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:46.683 [2024-07-24 17:32:32.880422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.683 [2024-07-24 17:32:32.880436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:32.988465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:32.988579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:46.940 [2024-07-24 17:32:32.988603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:32.988620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.082481] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.082608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:46.940 [2024-07-24 17:32:33.082631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.082682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.082867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.082898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:46.940 [2024-07-24 17:32:33.082912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.082927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.083052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:46.940 [2024-07-24 17:32:33.083066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.083082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.083260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:46.940 [2024-07-24 17:32:33.083273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.083288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.083370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:46.940 [2024-07-24 17:32:33.083385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.083418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.083493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:46.940 [2024-07-24 17:32:33.083509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.083523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.940 [2024-07-24 17:32:33.083607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:46.940 [2024-07-24 17:32:33.083620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.940 [2024-07-24 17:32:33.083634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.940 [2024-07-24 17:32:33.083888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.551 ms, result 0 00:31:46.940 true 00:31:46.940 17:32:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82334 00:31:46.940 17:32:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82334 00:31:46.940 17:32:33 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:31:47.197 [2024-07-24 17:32:33.210829] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:31:47.197 [2024-07-24 17:32:33.211027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83363 ] 00:31:47.197 [2024-07-24 17:32:33.380062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.459 [2024-07-24 17:32:33.633450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.322  Copying: 163/1024 [MB] (163 MBps) Copying: 326/1024 [MB] (162 MBps) Copying: 487/1024 [MB] (161 MBps) Copying: 654/1024 [MB] (166 MBps) Copying: 821/1024 [MB] (167 MBps) Copying: 1001/1024 [MB] (179 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:31:55.322 00:31:55.322 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82334 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:31:55.322 17:32:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:55.322 [2024-07-24 17:32:41.316094] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:31:55.322 [2024-07-24 17:32:41.316631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83444 ] 00:31:55.322 [2024-07-24 17:32:41.490216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.639 [2024-07-24 17:32:41.718297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.896 [2024-07-24 17:32:42.046079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:55.896 [2024-07-24 17:32:42.046424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:55.896 [2024-07-24 17:32:42.113240] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:55.896 [2024-07-24 17:32:42.113767] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:55.896 [2024-07-24 17:32:42.114123] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:56.462 [2024-07-24 17:32:42.394186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.394541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:56.462 [2024-07-24 17:32:42.394685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:56.462 [2024-07-24 17:32:42.394709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.462 [2024-07-24 17:32:42.394787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.394810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:56.462 [2024-07-24 17:32:42.394823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:56.462 [2024-07-24 17:32:42.394834] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.462 [2024-07-24 17:32:42.394863] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:56.462 [2024-07-24 17:32:42.395840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:56.462 [2024-07-24 17:32:42.395881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.395894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:56.462 [2024-07-24 17:32:42.395907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:31:56.462 [2024-07-24 17:32:42.395918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.462 [2024-07-24 17:32:42.398030] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:56.462 [2024-07-24 17:32:42.413393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.413434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:56.462 [2024-07-24 17:32:42.413474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.381 ms 00:31:56.462 [2024-07-24 17:32:42.413485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.462 [2024-07-24 17:32:42.413551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.413569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:56.462 [2024-07-24 17:32:42.413581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:56.462 [2024-07-24 17:32:42.413591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.462 [2024-07-24 17:32:42.422794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.462 [2024-07-24 17:32:42.422853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:56.463 [2024-07-24 17:32:42.422886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.048 ms 00:31:56.463 [2024-07-24 17:32:42.422897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.423018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.423039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:56.463 [2024-07-24 17:32:42.423051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:56.463 [2024-07-24 17:32:42.423062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.423132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.423149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:56.463 [2024-07-24 17:32:42.423164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:56.463 [2024-07-24 17:32:42.423175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.423212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:56.463 [2024-07-24 17:32:42.427971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.428009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:31:56.463 [2024-07-24 17:32:42.428040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:31:56.463 [2024-07-24 17:32:42.428051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.428105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.428121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:56.463 [2024-07-24 17:32:42.428133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:56.463 [2024-07-24 17:32:42.428143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.428207] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:56.463 [2024-07-24 17:32:42.428240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:56.463 [2024-07-24 17:32:42.428282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:56.463 [2024-07-24 17:32:42.428301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:56.463 [2024-07-24 17:32:42.428392] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:56.463 [2024-07-24 17:32:42.428405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:56.463 [2024-07-24 17:32:42.428420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:56.463 [2024-07-24 17:32:42.428434] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:56.463 [2024-07-24 17:32:42.428446] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:56.463 [2024-07-24 17:32:42.428462] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:56.463 [2024-07-24 17:32:42.428472] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:56.463 [2024-07-24 17:32:42.428482] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:56.463 [2024-07-24 17:32:42.428492] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:56.463 [2024-07-24 17:32:42.428503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.428514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:56.463 [2024-07-24 17:32:42.428525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:31:56.463 [2024-07-24 17:32:42.428545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.428625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.463 [2024-07-24 17:32:42.428638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:56.463 [2024-07-24 17:32:42.428665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:56.463 [2024-07-24 17:32:42.428719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.463 [2024-07-24 17:32:42.428838] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:56.463 [2024-07-24 
17:32:42.428854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:56.463 [2024-07-24 17:32:42.428866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:56.463 [2024-07-24 17:32:42.428877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.428888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:56.463 [2024-07-24 17:32:42.428897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.428907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:56.463 [2024-07-24 17:32:42.428917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:56.463 [2024-07-24 17:32:42.428926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:56.463 [2024-07-24 17:32:42.428936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:56.463 [2024-07-24 17:32:42.428946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:56.463 [2024-07-24 17:32:42.428956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:56.463 [2024-07-24 17:32:42.428965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:56.463 [2024-07-24 17:32:42.428974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:56.463 [2024-07-24 17:32:42.428986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:56.463 [2024-07-24 17:32:42.428996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:56.463 [2024-07-24 17:32:42.429030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:56.463 [2024-07-24 17:32:42.429061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:56.463 [2024-07-24 17:32:42.429136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:56.463 [2024-07-24 17:32:42.429183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:56.463 [2024-07-24 17:32:42.429215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:56.463 [2024-07-24 17:32:42.429246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:56.463 
[2024-07-24 17:32:42.429263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:56.463 [2024-07-24 17:32:42.429274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:56.463 [2024-07-24 17:32:42.429285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:56.463 [2024-07-24 17:32:42.429295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:56.463 [2024-07-24 17:32:42.429306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:56.463 [2024-07-24 17:32:42.429317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:56.463 [2024-07-24 17:32:42.429327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:56.463 [2024-07-24 17:32:42.429348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:56.463 [2024-07-24 17:32:42.429359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429369] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:56.463 [2024-07-24 17:32:42.429381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:56.463 [2024-07-24 17:32:42.429392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.463 [2024-07-24 17:32:42.429421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:56.463 [2024-07-24 17:32:42.429432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:56.463 [2024-07-24 17:32:42.429443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:56.463 [2024-07-24 17:32:42.429454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:56.463 [2024-07-24 17:32:42.429464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:56.463 [2024-07-24 17:32:42.429475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:56.463 [2024-07-24 17:32:42.429487] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:56.463 [2024-07-24 17:32:42.429502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.463 [2024-07-24 17:32:42.429515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:56.463 [2024-07-24 17:32:42.429527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:56.463 [2024-07-24 17:32:42.429539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:56.463 [2024-07-24 17:32:42.429551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:56.463 [2024-07-24 17:32:42.429562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:56.463 [2024-07-24 17:32:42.429579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:56.463 [2024-07-24 17:32:42.429597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:56.464 [2024-07-24 17:32:42.429611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:56.464 [2024-07-24 17:32:42.429627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:56.464 [2024-07-24 17:32:42.429645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:56.464 [2024-07-24 17:32:42.429752] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:56.464 [2024-07-24 17:32:42.429774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:56.464 [2024-07-24 17:32:42.429818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:56.464 [2024-07-24 17:32:42.429831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:56.464 [2024-07-24 17:32:42.429842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:56.464 [2024-07-24 17:32:42.429855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.429867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:56.464 [2024-07-24 17:32:42.429879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:31:56.464 [2024-07-24 17:32:42.429891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.476591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.476720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:56.464 [2024-07-24 17:32:42.476765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.619 ms 00:31:56.464 [2024-07-24 17:32:42.476779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.476917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.476934] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:56.464 [2024-07-24 17:32:42.476955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:56.464 [2024-07-24 17:32:42.476966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.521903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.521987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:56.464 [2024-07-24 17:32:42.522041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.832 ms 00:31:56.464 [2024-07-24 17:32:42.522070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.522162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.522188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:56.464 [2024-07-24 17:32:42.522219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:56.464 [2024-07-24 17:32:42.522231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.523288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.523488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:56.464 [2024-07-24 17:32:42.523607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:31:56.464 [2024-07-24 17:32:42.523732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.523968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.524023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:56.464 [2024-07-24 17:32:42.524128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:31:56.464 [2024-07-24 17:32:42.524260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.543279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.543501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:56.464 [2024-07-24 17:32:42.543529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.945 ms 00:31:56.464 [2024-07-24 17:32:42.543542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.560770] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:56.464 [2024-07-24 17:32:42.560819] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:56.464 [2024-07-24 17:32:42.560855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.560868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:56.464 [2024-07-24 17:32:42.560898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 00:31:56.464 [2024-07-24 17:32:42.560909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.590849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.590910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map 
metadata 00:31:56.464 [2024-07-24 17:32:42.590931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.886 ms 00:31:56.464 [2024-07-24 17:32:42.590944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.607274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.607324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:56.464 [2024-07-24 17:32:42.607343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:31:56.464 [2024-07-24 17:32:42.607355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.623025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.623075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:56.464 [2024-07-24 17:32:42.623094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.619 ms 00:31:56.464 [2024-07-24 17:32:42.623106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.464 [2024-07-24 17:32:42.624076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.464 [2024-07-24 17:32:42.624112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:56.464 [2024-07-24 17:32:42.624129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:31:56.464 [2024-07-24 17:32:42.624142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 [2024-07-24 17:32:42.704958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.722 [2024-07-24 17:32:42.705024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:56.722 [2024-07-24 17:32:42.705048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.790 ms 00:31:56.722 [2024-07-24 17:32:42.705062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 [2024-07-24 17:32:42.718968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:56.722 [2024-07-24 17:32:42.723252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.722 [2024-07-24 17:32:42.723299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:56.722 [2024-07-24 17:32:42.723321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.112 ms 00:31:56.722 [2024-07-24 17:32:42.723334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 [2024-07-24 17:32:42.723471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.722 [2024-07-24 17:32:42.723497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:56.722 [2024-07-24 17:32:42.723512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:56.722 [2024-07-24 17:32:42.723523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 [2024-07-24 17:32:42.723668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.722 [2024-07-24 17:32:42.723690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:56.722 [2024-07-24 17:32:42.723704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:56.722 [2024-07-24 17:32:42.723716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 
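Note on the layout dump above: ftl_layout.c prints each region in MiB, while the superblock view from ftl_sb_v5.c prints the same regions as raw block offsets and sizes. Assuming SPDK FTL's 4 KiB logical block size, which this log never states explicitly, the two views agree; for example the type:0x2 entry at blk_offs:0x20 blk_sz:0x5000 lines up with the l2p region at offset 0.12 MiB, blocks 80.00 MiB. A minimal bash sketch of the conversion, using a hypothetical helper name:
  # blk_to_mib: convert a hex count of 4 KiB FTL blocks into MiB
  # (the 4 KiB block size is an assumption consistent with this dump)
  blk_to_mib() { awk -v b="$((16#${1#0x}))" 'BEGIN { printf "%.2f MiB\n", b * 4096 / (1024 * 1024) }'; }
  blk_to_mib 0x20     # 0.12 MiB  -> offset of the l2p region
  blk_to_mib 0x5000   # 80.00 MiB -> size of the l2p region
The same 80 MiB also falls out of the reported L2P geometry: 20971520 entries times a 4-byte address size is exactly 80 MiB.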
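Each management step in this trace is bracketed by trace_step entries carrying a name, a duration and a status. Given the raw console log with one entry per line, saved under a hypothetical name build.log, a short pipeline can rank the slowest steps; a sketch:
  # Pair each "name:" entry with the "duration:" entry that follows it,
  # then sort the steps by duration, longest first.
  awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
       /trace_step.*duration:/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d);
                                 printf "%10.3f ms  %s\n", d, step }' build.log | sort -rn | head
On the startup traced here it would surface Restore P2L checkpoints (80.790 ms), Initialize metadata (46.619 ms) and Initialize NV cache (44.832 ms) as the dominant steps.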
[2024-07-24 17:32:42.723763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.722 [2024-07-24 17:32:42.723780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:56.722 [2024-07-24 17:32:42.723799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:56.722 [2024-07-24 17:32:42.723811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.722 [2024-07-24 17:32:42.723860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:56.722 [2024-07-24 17:32:42.723879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.723 [2024-07-24 17:32:42.723891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:56.723 [2024-07-24 17:32:42.723904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:56.723 [2024-07-24 17:32:42.723915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.723 [2024-07-24 17:32:42.755248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.723 [2024-07-24 17:32:42.755316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:56.723 [2024-07-24 17:32:42.755338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.302 ms 00:31:56.723 [2024-07-24 17:32:42.755350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.723 [2024-07-24 17:32:42.755452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.723 [2024-07-24 17:32:42.755478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:56.723 [2024-07-24 17:32:42.755494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:56.723 [2024-07-24 17:32:42.755505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.723 [2024-07-24 17:32:42.757059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.252 ms, result 0 00:32:40.246  [43 intermediate dd progress updates elided; per-interval throughput mostly 22-27 MBps] Copying: 
1024/1024 [MB] (average 23 MBps)[2024-07-24 17:33:26.195725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.246 [2024-07-24 17:33:26.196014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:40.246 [2024-07-24 17:33:26.196179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:40.246 [2024-07-24 17:33:26.196350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.246 [2024-07-24 17:33:26.199166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:40.247 [2024-07-24 17:33:26.208164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.208220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:40.247 [2024-07-24 17:33:26.208244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.746 ms 00:32:40.247 [2024-07-24 17:33:26.208259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.223489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.223559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:40.247 [2024-07-24 17:33:26.223582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.317 ms 00:32:40.247 [2024-07-24 17:33:26.223597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.247590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.247660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:40.247 [2024-07-24 17:33:26.247683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.965 ms 00:32:40.247 [2024-07-24 17:33:26.247699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.255878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.255925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:40.247 [2024-07-24 17:33:26.255953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.131 ms 00:32:40.247 [2024-07-24 17:33:26.255967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.294894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.294948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:40.247 [2024-07-24 17:33:26.294969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 00:32:40.247 [2024-07-24 17:33:26.294983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.317002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.317071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:40.247 [2024-07-24 17:33:26.317094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.955 ms 00:32:40.247 [2024-07-24 17:33:26.317110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.434386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.434470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 
00:32:40.247 [2024-07-24 17:33:26.434505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.219 ms 00:32:40.247 [2024-07-24 17:33:26.434526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-07-24 17:33:26.465756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.247 [2024-07-24 17:33:26.465823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:32:40.247 [2024-07-24 17:33:26.465855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.196 ms 00:32:40.247 [2024-07-24 17:33:26.465866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.506 [2024-07-24 17:33:26.493739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.506 [2024-07-24 17:33:26.493796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:32:40.506 [2024-07-24 17:33:26.493826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.832 ms 00:32:40.506 [2024-07-24 17:33:26.493836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.506 [2024-07-24 17:33:26.520131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.506 [2024-07-24 17:33:26.520186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:40.506 [2024-07-24 17:33:26.520217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.255 ms 00:32:40.506 [2024-07-24 17:33:26.520227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.506 [2024-07-24 17:33:26.546010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.506 [2024-07-24 17:33:26.546066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:40.506 [2024-07-24 17:33:26.546097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:32:40.506 [2024-07-24 17:33:26.546107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.506 [2024-07-24 17:33:26.546147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:40.506 [2024-07-24 17:33:26.546176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116992 / 261120 wr_cnt: 1 state: open 00:32:40.506 [2024-07-24 17:33:26.546191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 
261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546936] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:40.506 [2024-07-24 17:33:26.546970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.546982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.546993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 
17:33:26.547254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:40.507 [2024-07-24 17:33:26.547497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:40.507 [2024-07-24 17:33:26.547508] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6835e3a-4cec-4e40-b073-a8ea29e11d28 00:32:40.507 [2024-07-24 17:33:26.547524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116992 00:32:40.507 [2024-07-24 17:33:26.547534] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117952 00:32:40.507 [2024-07-24 17:33:26.547549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116992 00:32:40.507 [2024-07-24 17:33:26.547560] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:32:40.507 [2024-07-24 17:33:26.547571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:40.507 [2024-07-24 17:33:26.547582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:40.507 [2024-07-24 17:33:26.547592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:40.507 [2024-07-24 17:33:26.547602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:40.507 [2024-07-24 17:33:26.547611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
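The statistics dump above is internally consistent: user writes, 116992, equals both the total valid LBAs and the valid count of the one open band (Band 1, wr_cnt: 1), and the reported WAF is simply total writes divided by user writes, the extra 960 blocks presumably being FTL metadata writes. A one-line check:
  awk 'BEGIN { printf "WAF = %.4f\n", 117952 / 116992 }'   # prints WAF = 1.0082, as reported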
00:32:40.507 [2024-07-24 17:33:26.547622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.507 [2024-07-24 17:33:26.547633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:40.507 [2024-07-24 17:33:26.547655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:32:40.507 [2024-07-24 17:33:26.547666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.562345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.507 [2024-07-24 17:33:26.562399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:40.507 [2024-07-24 17:33:26.562430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:32:40.507 [2024-07-24 17:33:26.562440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.562917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.507 [2024-07-24 17:33:26.562942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:40.507 [2024-07-24 17:33:26.562956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:32:40.507 [2024-07-24 17:33:26.562967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.596218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.507 [2024-07-24 17:33:26.596275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:40.507 [2024-07-24 17:33:26.596306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.507 [2024-07-24 17:33:26.596316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.596375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.507 [2024-07-24 17:33:26.596388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:40.507 [2024-07-24 17:33:26.596399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.507 [2024-07-24 17:33:26.596409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.596544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.507 [2024-07-24 17:33:26.596563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:40.507 [2024-07-24 17:33:26.596576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.507 [2024-07-24 17:33:26.596587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.596609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.507 [2024-07-24 17:33:26.596621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:40.507 [2024-07-24 17:33:26.596632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.507 [2024-07-24 17:33:26.596642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.507 [2024-07-24 17:33:26.689229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.507 [2024-07-24 17:33:26.689293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:40.507 [2024-07-24 17:33:26.689325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.507 [2024-07-24 17:33:26.689336] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:40.765 [2024-07-24 17:33:26.764161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:40.765 [2024-07-24 17:33:26.764304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:40.765 [2024-07-24 17:33:26.764449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:40.765 [2024-07-24 17:33:26.764641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:40.765 [2024-07-24 17:33:26.764774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:40.765 [2024-07-24 17:33:26.764881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.764957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.765 [2024-07-24 17:33:26.764973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:40.765 [2024-07-24 17:33:26.764984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.765 [2024-07-24 17:33:26.764995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.765 [2024-07-24 17:33:26.765164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.703 ms, result 0 00:32:42.663 00:32:42.663 00:32:42.663 17:33:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:44.588 17:33:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:44.588 [2024-07-24 17:33:30.430777] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:32:44.588 [2024-07-24 17:33:30.430920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83925 ] 00:32:44.588 [2024-07-24 17:33:30.597642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.846 [2024-07-24 17:33:30.841709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.104 [2024-07-24 17:33:31.161454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:45.104 [2024-07-24 17:33:31.161540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:45.104 [2024-07-24 17:33:31.325507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.104 [2024-07-24 17:33:31.325603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:45.104 [2024-07-24 17:33:31.325623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:45.104 [2024-07-24 17:33:31.325634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.104 [2024-07-24 17:33:31.325720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.104 [2024-07-24 17:33:31.325738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:45.104 [2024-07-24 17:33:31.325750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:45.104 [2024-07-24 17:33:31.325765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.104 [2024-07-24 17:33:31.325798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:45.104 [2024-07-24 17:33:31.326763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:45.104 [2024-07-24 17:33:31.326798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.104 [2024-07-24 17:33:31.326812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:45.104 [2024-07-24 17:33:31.326825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:32:45.104 [2024-07-24 17:33:31.326836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.104 [2024-07-24 17:33:31.329127] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:45.363 [2024-07-24 17:33:31.344531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.344587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:45.363 [2024-07-24 17:33:31.344604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.406 ms 00:32:45.363 [2024-07-24 17:33:31.344616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.344703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 
17:33:31.344730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:45.363 [2024-07-24 17:33:31.344743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:45.363 [2024-07-24 17:33:31.344753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.353812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.353860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:45.363 [2024-07-24 17:33:31.353873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.915 ms 00:32:45.363 [2024-07-24 17:33:31.353883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.353975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.353992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:45.363 [2024-07-24 17:33:31.354003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:45.363 [2024-07-24 17:33:31.354012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.354068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.354084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:45.363 [2024-07-24 17:33:31.354096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:45.363 [2024-07-24 17:33:31.354105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.354136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:45.363 [2024-07-24 17:33:31.358526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.358555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:45.363 [2024-07-24 17:33:31.358567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.398 ms 00:32:45.363 [2024-07-24 17:33:31.358581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.358617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.363 [2024-07-24 17:33:31.358630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:45.363 [2024-07-24 17:33:31.358641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:45.363 [2024-07-24 17:33:31.358677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.363 [2024-07-24 17:33:31.358751] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:45.364 [2024-07-24 17:33:31.358783] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:45.364 [2024-07-24 17:33:31.358821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:45.364 [2024-07-24 17:33:31.358842] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:45.364 [2024-07-24 17:33:31.358983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:45.364 [2024-07-24 17:33:31.359030] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:45.364 [2024-07-24 17:33:31.359046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:45.364 [2024-07-24 17:33:31.359061] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359074] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359086] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:45.364 [2024-07-24 17:33:31.359097] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:45.364 [2024-07-24 17:33:31.359107] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:45.364 [2024-07-24 17:33:31.359118] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:45.364 [2024-07-24 17:33:31.359136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.364 [2024-07-24 17:33:31.359149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:45.364 [2024-07-24 17:33:31.359161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:32:45.364 [2024-07-24 17:33:31.359172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.364 [2024-07-24 17:33:31.359265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.364 [2024-07-24 17:33:31.359280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:45.364 [2024-07-24 17:33:31.359307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:45.364 [2024-07-24 17:33:31.359332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.364 [2024-07-24 17:33:31.359444] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:45.364 [2024-07-24 17:33:31.359464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:45.364 [2024-07-24 17:33:31.359475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:45.364 [2024-07-24 17:33:31.359535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:45.364 [2024-07-24 17:33:31.359564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:45.364 [2024-07-24 17:33:31.359582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:45.364 [2024-07-24 17:33:31.359592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:45.364 [2024-07-24 17:33:31.359601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:45.364 [2024-07-24 17:33:31.359611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:45.364 [2024-07-24 17:33:31.359620] ftl_layout.c: 119:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:32:45.364 [2024-07-24 17:33:31.359629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:45.364 [2024-07-24 17:33:31.359648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:45.364 [2024-07-24 17:33:31.359704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:45.364 [2024-07-24 17:33:31.359733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:45.364 [2024-07-24 17:33:31.359779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:45.364 [2024-07-24 17:33:31.359808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.364 [2024-07-24 17:33:31.359840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:45.364 [2024-07-24 17:33:31.359850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:45.364 [2024-07-24 17:33:31.359868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:45.364 [2024-07-24 17:33:31.359878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:45.364 [2024-07-24 17:33:31.359887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:45.364 [2024-07-24 17:33:31.359896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:45.364 [2024-07-24 17:33:31.359905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:45.364 [2024-07-24 17:33:31.359914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:45.364 [2024-07-24 17:33:31.359932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:45.364 [2024-07-24 17:33:31.359942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359950] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:45.364 [2024-07-24 17:33:31.359960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:45.364 [2024-07-24 17:33:31.359970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:45.364 [2024-07-24 
17:33:31.359981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.364 [2024-07-24 17:33:31.359991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:45.364 [2024-07-24 17:33:31.360001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:45.364 [2024-07-24 17:33:31.360017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:45.364 [2024-07-24 17:33:31.360027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:45.364 [2024-07-24 17:33:31.360036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:45.364 [2024-07-24 17:33:31.360046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:45.364 [2024-07-24 17:33:31.360056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:45.364 [2024-07-24 17:33:31.360084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:45.364 [2024-07-24 17:33:31.360106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:45.364 [2024-07-24 17:33:31.360118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:45.364 [2024-07-24 17:33:31.360128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:45.364 [2024-07-24 17:33:31.360138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:45.364 [2024-07-24 17:33:31.360148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:45.364 [2024-07-24 17:33:31.360158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:45.364 [2024-07-24 17:33:31.360168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:45.364 [2024-07-24 17:33:31.360178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:45.364 [2024-07-24 17:33:31.360188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:45.364 [2024-07-24 
17:33:31.360237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:45.364 [2024-07-24 17:33:31.360253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:45.364 [2024-07-24 17:33:31.360274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:45.364 [2024-07-24 17:33:31.360284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:45.364 [2024-07-24 17:33:31.360294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:45.364 [2024-07-24 17:33:31.360305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.360315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:45.365 [2024-07-24 17:33:31.360326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:32:45.365 [2024-07-24 17:33:31.360335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.403208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.403275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:45.365 [2024-07-24 17:33:31.403294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.812 ms 00:32:45.365 [2024-07-24 17:33:31.403305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.403416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.403431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:45.365 [2024-07-24 17:33:31.403442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:45.365 [2024-07-24 17:33:31.403452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.444506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.444575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.365 [2024-07-24 17:33:31.444595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.951 ms 00:32:45.365 [2024-07-24 17:33:31.444607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.444687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.444706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.365 [2024-07-24 17:33:31.444720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:45.365 [2024-07-24 17:33:31.444737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.445437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.445461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.365 [2024-07-24 17:33:31.445475] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:32:45.365 [2024-07-24 17:33:31.445485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.445671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.445689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.365 [2024-07-24 17:33:31.445702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:32:45.365 [2024-07-24 17:33:31.445732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.462399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.462449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.365 [2024-07-24 17:33:31.462468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.640 ms 00:32:45.365 [2024-07-24 17:33:31.462479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.478217] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:45.365 [2024-07-24 17:33:31.478268] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:45.365 [2024-07-24 17:33:31.478284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.478296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:45.365 [2024-07-24 17:33:31.478307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.610 ms 00:32:45.365 [2024-07-24 17:33:31.478316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.505066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.505123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:45.365 [2024-07-24 17:33:31.505138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.707 ms 00:32:45.365 [2024-07-24 17:33:31.505149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.520085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.520136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:45.365 [2024-07-24 17:33:31.520150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.891 ms 00:32:45.365 [2024-07-24 17:33:31.520160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.534267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.534318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:45.365 [2024-07-24 17:33:31.534332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.067 ms 00:32:45.365 [2024-07-24 17:33:31.534341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.365 [2024-07-24 17:33:31.535327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.365 [2024-07-24 17:33:31.535378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:45.365 [2024-07-24 17:33:31.535391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.866 ms 00:32:45.365 [2024-07-24 17:33:31.535407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.606730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.606808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:45.624 [2024-07-24 17:33:31.606849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.299 ms 00:32:45.624 [2024-07-24 17:33:31.606861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.619208] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:45.624 [2024-07-24 17:33:31.622549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.622593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:45.624 [2024-07-24 17:33:31.622609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.624 ms 00:32:45.624 [2024-07-24 17:33:31.622619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.622737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.622758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:45.624 [2024-07-24 17:33:31.622770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:45.624 [2024-07-24 17:33:31.622785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.624908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.624955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:45.624 [2024-07-24 17:33:31.624970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:32:45.624 [2024-07-24 17:33:31.624995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.625045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.625060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:45.624 [2024-07-24 17:33:31.625071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:45.624 [2024-07-24 17:33:31.625081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.625118] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:45.624 [2024-07-24 17:33:31.625136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.625147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:45.624 [2024-07-24 17:33:31.625158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:45.624 [2024-07-24 17:33:31.625168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.653976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.654029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:45.624 [2024-07-24 17:33:31.654051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.785 ms 00:32:45.624 [2024-07-24 17:33:31.654065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
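The startup trace up to this point is internally consistent once you assume SPDK FTL's 4 KiB logical block size (an assumption here; the block size is not stated in the log itself): the SB metadata dump gives region sizes in blocks (blk_sz, hex), while the NV cache layout dump gives the same regions in MiB. A minimal shell cross-check, with every constant copied from the dump above; none of this is part of the test run:

# Region type 0x2 (l2p): blk_sz 0x5000 = 20480 blocks of 4 KiB (assumed block size)
echo "l2p region: $(( 0x5000 * 4096 / 1048576 )) MiB"   # -> 80, matches "Region l2p ... blocks: 80.00 MiB"
# The L2P table itself: 20971520 entries at "L2P address size: 4" bytes each
echo "l2p table:  $(( 20971520 * 4 / 1048576 )) MiB"    # -> 80, the table exactly fills its region
# Superblock region: blk_offs 0x0, blk_sz 0x20 = 32 blocks
echo "sb region:   $(( 0x20 * 4096 )) B"                # -> 131072 B = 0.125 MiB, printed as "0.12 MiB"

The same block size accounts for the transfer size of the spdk_dd read at dirty_shutdown.sh@93 above: --count=262144 blocks of 4 KiB is exactly 1 GiB, which is why the progress meter below runs to 1024 MB.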
00:32:45.624 [2024-07-24 17:33:31.654148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.624 [2024-07-24 17:33:31.654166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:45.624 [2024-07-24 17:33:31.654178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:45.624 [2024-07-24 17:33:31.654187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.624 [2024-07-24 17:33:31.661584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.166 ms, result 0 00:33:24.131  Copying: 932/1048576 [kB] (932 kBps) Copying: 4700/1048576 [kB] (3768 kBps) Copying: 25/1024 [MB] (21 MBps) Copying: 54/1024 [MB] (28 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 110/1024 [MB] (29 MBps) Copying: 137/1024 [MB] (27 MBps) Copying: 165/1024 [MB] (28 MBps) Copying: 192/1024 [MB] (27 MBps) Copying: 221/1024 [MB] (28 MBps) Copying: 249/1024 [MB] (28 MBps) Copying: 277/1024 [MB] (27 MBps) Copying: 306/1024 [MB] (28 MBps) Copying: 335/1024 [MB] (28 MBps) Copying: 363/1024 [MB] (28 MBps) Copying: 392/1024 [MB] (28 MBps) Copying: 419/1024 [MB] (27 MBps) Copying: 449/1024 [MB] (29 MBps) Copying: 478/1024 [MB] (29 MBps) Copying: 507/1024 [MB] (29 MBps) Copying: 536/1024 [MB] (28 MBps) Copying: 563/1024 [MB] (27 MBps) Copying: 592/1024 [MB] (28 MBps) Copying: 620/1024 [MB] (28 MBps) Copying: 649/1024 [MB] (28 MBps) Copying: 678/1024 [MB] (28 MBps) Copying: 706/1024 [MB] (28 MBps) Copying: 734/1024 [MB] (28 MBps) Copying: 762/1024 [MB] (28 MBps) Copying: 790/1024 [MB] (28 MBps) Copying: 819/1024 [MB] (28 MBps) Copying: 847/1024 [MB] (28 MBps) Copying: 875/1024 [MB] (27 MBps) Copying: 903/1024 [MB] (28 MBps) Copying: 931/1024 [MB] (27 MBps) Copying: 959/1024 [MB] (27 MBps) Copying: 987/1024 [MB] (28 MBps) Copying: 1014/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-24 17:34:10.209595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.209713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:24.131 [2024-07-24 17:34:10.209742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:24.131 [2024-07-24 17:34:10.209759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.209800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:24.131 [2024-07-24 17:34:10.214946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.214994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:24.131 [2024-07-24 17:34:10.215019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:33:24.131 [2024-07-24 17:34:10.215042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.215293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.215316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:24.131 [2024-07-24 17:34:10.215337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:33:24.131 [2024-07-24 17:34:10.215348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.227983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 
[2024-07-24 17:34:10.228039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:24.131 [2024-07-24 17:34:10.228059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.611 ms 00:33:24.131 [2024-07-24 17:34:10.228071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.235689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.235758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:24.131 [2024-07-24 17:34:10.235796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.576 ms 00:33:24.131 [2024-07-24 17:34:10.235816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.269101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.269144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:24.131 [2024-07-24 17:34:10.269162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.177 ms 00:33:24.131 [2024-07-24 17:34:10.269173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.287568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.287624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:24.131 [2024-07-24 17:34:10.287640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.350 ms 00:33:24.131 [2024-07-24 17:34:10.287652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.292019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.292060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:24.131 [2024-07-24 17:34:10.292076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.295 ms 00:33:24.131 [2024-07-24 17:34:10.292087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.322058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.322109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:33:24.131 [2024-07-24 17:34:10.322137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.944 ms 00:33:24.131 [2024-07-24 17:34:10.322147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.131 [2024-07-24 17:34:10.353032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.131 [2024-07-24 17:34:10.353096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:33:24.131 [2024-07-24 17:34:10.353110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.846 ms 00:33:24.131 [2024-07-24 17:34:10.353119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-07-24 17:34:10.382546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-07-24 17:34:10.382595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:24.389 [2024-07-24 17:34:10.382609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.388 ms 00:33:24.389 [2024-07-24 17:34:10.382633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-07-24 17:34:10.409291] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-07-24 17:34:10.409352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:24.389 [2024-07-24 17:34:10.409368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:33:24.389 [2024-07-24 17:34:10.409377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-07-24 17:34:10.409424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:24.389 [2024-07-24 17:34:10.409446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:24.389 [2024-07-24 17:34:10.409460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:33:24.389 [2024-07-24 17:34:10.409472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 
17:34:10.409725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:24.389 [2024-07-24 17:34:10.409923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:33:24.390 [2024-07-24 17:34:10.409984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.409994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:24.390 [2024-07-24 17:34:10.410645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:24.390 [2024-07-24 17:34:10.410655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6835e3a-4cec-4e40-b073-a8ea29e11d28 00:33:24.390 [2024-07-24 17:34:10.410671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:33:24.390 [2024-07-24 17:34:10.410681] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 150208 00:33:24.390 [2024-07-24 17:34:10.410690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 148224 00:33:24.390 [2024-07-24 17:34:10.410704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0134 00:33:24.390 [2024-07-24 17:34:10.410714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:24.390 [2024-07-24 17:34:10.410724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:24.390 [2024-07-24 17:34:10.410744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:24.390 [2024-07-24 17:34:10.410764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:24.390 [2024-07-24 17:34:10.410774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:24.390 [2024-07-24 17:34:10.410784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-07-24 17:34:10.410794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:24.390 [2024-07-24 17:34:10.410804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:33:24.390 [2024-07-24 17:34:10.410814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.426272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-07-24 17:34:10.426324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:24.390 [2024-07-24 17:34:10.426338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.422 ms 00:33:24.390 [2024-07-24 17:34:10.426369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.426931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-07-24 17:34:10.426959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:24.390 [2024-07-24 17:34:10.426973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:33:24.390 [2024-07-24 17:34:10.426984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.463546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.390 [2024-07-24 17:34:10.463590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:24.390 [2024-07-24 17:34:10.463604] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.390 [2024-07-24 17:34:10.463616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.463714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.390 [2024-07-24 17:34:10.463732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:24.390 [2024-07-24 17:34:10.463744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.390 [2024-07-24 17:34:10.463754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.463878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.390 [2024-07-24 17:34:10.463897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:24.390 [2024-07-24 17:34:10.463910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.390 [2024-07-24 17:34:10.463921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-07-24 17:34:10.463943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.391 [2024-07-24 17:34:10.463957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:24.391 [2024-07-24 17:34:10.463968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.391 [2024-07-24 17:34:10.463979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.391 [2024-07-24 17:34:10.565213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.391 [2024-07-24 17:34:10.565278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:24.391 [2024-07-24 17:34:10.565295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.391 [2024-07-24 17:34:10.565306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.648585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.648631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:24.648 [2024-07-24 17:34:10.648669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.648715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.648828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.648846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:24.648 [2024-07-24 17:34:10.648879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.648891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.648940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.648956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:24.648 [2024-07-24 17:34:10.648968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.648979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.649132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.649157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:33:24.648 [2024-07-24 17:34:10.649176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.649187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.649235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.649251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:24.648 [2024-07-24 17:34:10.649264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.649274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.649320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.649335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:24.648 [2024-07-24 17:34:10.649347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.649363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.649428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.648 [2024-07-24 17:34:10.649443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:24.648 [2024-07-24 17:34:10.649471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.648 [2024-07-24 17:34:10.649495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-07-24 17:34:10.649652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.033 ms, result 0 00:33:25.582 00:33:25.582 00:33:25.582 17:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:28.111 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:28.111 17:34:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:28.111 [2024-07-24 17:34:13.885295] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
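dirty_shutdown.sh@94 and @95 above show the shape of the read-back verification: after the dirty shutdown and the subsequent restore, each 1 GiB half of the data (--count=262144, i.e. 262144 blocks at the 4 KiB block size) is read out of the ftl0 bdev with spdk_dd and compared against an md5 recorded before the shutdown, the second half selected with --skip. A condensed sketch of that pattern, paths shortened for readability; the testfile2.md5 check is an assumption, since it would fall past the end of this excerpt:

cd /home/vagrant/spdk_repo/spdk/test/ftl
# first half: blocks 0..262143 (1 GiB) out of the restored FTL bdev
spdk_dd --ib=ftl0 --of=testfile  --count=262144 --json=config/ftl.json
md5sum -c testfile.md5
# second half: blocks 262144..524287, selected with --skip
spdk_dd --ib=ftl0 --of=testfile2 --count=262144 --skip=262144 --json=config/ftl.json
md5sum -c testfile2.md5    # assumed: this check is not shown in this excerpt

The statistics dumped during the shutdown also check out against each other: total valid LBAs 265216 is exactly Band 1 full plus Band 2's open head (261120 + 4096), and the reported WAF of 1.0134 is total writes over user writes, 150208 / 148224 ≈ 1.0134.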
00:33:28.111 [2024-07-24 17:34:13.885476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84361 ] 00:33:28.111 [2024-07-24 17:34:14.055415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.111 [2024-07-24 17:34:14.313521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.677 [2024-07-24 17:34:14.656264] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:28.677 [2024-07-24 17:34:14.656344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:28.677 [2024-07-24 17:34:14.820093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.677 [2024-07-24 17:34:14.820159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:28.677 [2024-07-24 17:34:14.820178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:28.677 [2024-07-24 17:34:14.820190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.677 [2024-07-24 17:34:14.820256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.677 [2024-07-24 17:34:14.820275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:28.677 [2024-07-24 17:34:14.820288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:28.677 [2024-07-24 17:34:14.820304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.820337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:28.678 [2024-07-24 17:34:14.821253] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:28.678 [2024-07-24 17:34:14.821287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.821299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:28.678 [2024-07-24 17:34:14.821311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:33:28.678 [2024-07-24 17:34:14.821320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.823327] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:28.678 [2024-07-24 17:34:14.838853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.838903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:28.678 [2024-07-24 17:34:14.838919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.527 ms 00:33:28.678 [2024-07-24 17:34:14.838929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.838994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.839057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:28.678 [2024-07-24 17:34:14.839083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:33:28.678 [2024-07-24 17:34:14.839102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.848315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:28.678 [2024-07-24 17:34:14.848354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.678 [2024-07-24 17:34:14.848369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.120 ms 00:33:28.678 [2024-07-24 17:34:14.848381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.848510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.848528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.678 [2024-07-24 17:34:14.848541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:28.678 [2024-07-24 17:34:14.848551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.848626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.848652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:28.678 [2024-07-24 17:34:14.848663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:28.678 [2024-07-24 17:34:14.848673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.848724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:28.678 [2024-07-24 17:34:14.853580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.853625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.678 [2024-07-24 17:34:14.853647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:33:28.678 [2024-07-24 17:34:14.853657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.853718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.853734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:28.678 [2024-07-24 17:34:14.853746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:28.678 [2024-07-24 17:34:14.853756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.853813] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:28.678 [2024-07-24 17:34:14.853844] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:28.678 [2024-07-24 17:34:14.853915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:28.678 [2024-07-24 17:34:14.853938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:28.678 [2024-07-24 17:34:14.854033] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:28.678 [2024-07-24 17:34:14.854047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:28.678 [2024-07-24 17:34:14.854061] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:28.678 [2024-07-24 17:34:14.854075] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854088] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854100] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:28.678 [2024-07-24 17:34:14.854110] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:28.678 [2024-07-24 17:34:14.854120] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:28.678 [2024-07-24 17:34:14.854130] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:28.678 [2024-07-24 17:34:14.854141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.854156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:28.678 [2024-07-24 17:34:14.854167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:33:28.678 [2024-07-24 17:34:14.854177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.854261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.678 [2024-07-24 17:34:14.854276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:28.678 [2024-07-24 17:34:14.854286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:28.678 [2024-07-24 17:34:14.854296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.678 [2024-07-24 17:34:14.854392] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:28.678 [2024-07-24 17:34:14.854419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:28.678 [2024-07-24 17:34:14.854438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:28.678 [2024-07-24 17:34:14.854471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:28.678 [2024-07-24 17:34:14.854501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:28.678 [2024-07-24 17:34:14.854520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:28.678 [2024-07-24 17:34:14.854530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:28.678 [2024-07-24 17:34:14.854539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:28.678 [2024-07-24 17:34:14.854548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:28.678 [2024-07-24 17:34:14.854558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:28.678 [2024-07-24 17:34:14.854568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:28.678 [2024-07-24 17:34:14.854587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854597] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:28.678 [2024-07-24 17:34:14.854628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:28.678 [2024-07-24 17:34:14.854674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:28.678 [2024-07-24 17:34:14.854720] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:28.678 [2024-07-24 17:34:14.854749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.678 [2024-07-24 17:34:14.854769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:28.678 [2024-07-24 17:34:14.854779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:28.678 [2024-07-24 17:34:14.854799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:28.678 [2024-07-24 17:34:14.854809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:28.678 [2024-07-24 17:34:14.854818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:28.678 [2024-07-24 17:34:14.854830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:28.678 [2024-07-24 17:34:14.854841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:28.678 [2024-07-24 17:34:14.854852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:28.678 [2024-07-24 17:34:14.854887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:28.678 [2024-07-24 17:34:14.854896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.678 [2024-07-24 17:34:14.854905] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:28.678 [2024-07-24 17:34:14.854917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:28.678 [2024-07-24 17:34:14.854928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:28.679 [2024-07-24 17:34:14.854938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.679 [2024-07-24 17:34:14.854953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:28.679 [2024-07-24 17:34:14.854963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:28.679 [2024-07-24 17:34:14.854973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:28.679 
[2024-07-24 17:34:14.854983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:28.679 [2024-07-24 17:34:14.854999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:28.679 [2024-07-24 17:34:14.855038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:28.679 [2024-07-24 17:34:14.855052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:28.679 [2024-07-24 17:34:14.855068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:28.679 [2024-07-24 17:34:14.855094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:28.679 [2024-07-24 17:34:14.855106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:28.679 [2024-07-24 17:34:14.855117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:28.679 [2024-07-24 17:34:14.855128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:28.679 [2024-07-24 17:34:14.855140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:28.679 [2024-07-24 17:34:14.855151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:28.679 [2024-07-24 17:34:14.855163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:28.679 [2024-07-24 17:34:14.855175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:28.679 [2024-07-24 17:34:14.855186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:28.679 [2024-07-24 17:34:14.855246] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:28.679 [2024-07-24 17:34:14.855260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:28.679 [2024-07-24 17:34:14.855301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:28.679 [2024-07-24 17:34:14.855313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:28.679 [2024-07-24 17:34:14.855325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:28.679 [2024-07-24 17:34:14.855363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.679 [2024-07-24 17:34:14.855374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:28.679 [2024-07-24 17:34:14.855385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:33:28.679 [2024-07-24 17:34:14.855396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.679 [2024-07-24 17:34:14.903210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.679 [2024-07-24 17:34:14.903269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.679 [2024-07-24 17:34:14.903291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.751 ms 00:33:28.679 [2024-07-24 17:34:14.903304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.679 [2024-07-24 17:34:14.903456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.679 [2024-07-24 17:34:14.903472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:28.679 [2024-07-24 17:34:14.903484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:28.679 [2024-07-24 17:34:14.903494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.944788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.944865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.939 [2024-07-24 17:34:14.944885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.201 ms 00:33:28.939 [2024-07-24 17:34:14.944895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.944970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.944985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:28.939 [2024-07-24 17:34:14.944997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:28.939 [2024-07-24 17:34:14.945014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.945725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.945748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:28.939 [2024-07-24 17:34:14.945762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:33:28.939 [2024-07-24 17:34:14.945772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.945951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.945970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:28.939 [2024-07-24 17:34:14.945981] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:33:28.939 [2024-07-24 17:34:14.945991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.963598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.963680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:28.939 [2024-07-24 17:34:14.963701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.573 ms 00:33:28.939 [2024-07-24 17:34:14.963718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:14.978965] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:28.939 [2024-07-24 17:34:14.979062] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:28.939 [2024-07-24 17:34:14.979094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:14.979107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:28.939 [2024-07-24 17:34:14.979123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.189 ms 00:33:28.939 [2024-07-24 17:34:14.979135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:15.008516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:15.008624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:28.939 [2024-07-24 17:34:15.008645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.302 ms 00:33:28.939 [2024-07-24 17:34:15.008656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.939 [2024-07-24 17:34:15.024216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.939 [2024-07-24 17:34:15.024297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:28.939 [2024-07-24 17:34:15.024317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.423 ms 00:33:28.940 [2024-07-24 17:34:15.024329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.037975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.038046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:28.940 [2024-07-24 17:34:15.038064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.568 ms 00:33:28.940 [2024-07-24 17:34:15.038073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.039062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.039094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:28.940 [2024-07-24 17:34:15.039110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:33:28.940 [2024-07-24 17:34:15.039121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.118024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.118094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:28.940 [2024-07-24 17:34:15.118117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.869 ms 00:33:28.940 [2024-07-24 17:34:15.118137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.133472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:28.940 [2024-07-24 17:34:15.138203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.138258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:28.940 [2024-07-24 17:34:15.138276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.974 ms 00:33:28.940 [2024-07-24 17:34:15.138286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.138423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.138442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:28.940 [2024-07-24 17:34:15.138454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:28.940 [2024-07-24 17:34:15.138465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.139626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.139685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:28.940 [2024-07-24 17:34:15.139701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:33:28.940 [2024-07-24 17:34:15.139713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.139751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.139766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:28.940 [2024-07-24 17:34:15.139779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:28.940 [2024-07-24 17:34:15.139790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.139842] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:28.940 [2024-07-24 17:34:15.139858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.139874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:28.940 [2024-07-24 17:34:15.139887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:28.940 [2024-07-24 17:34:15.139898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.170351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.170424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:28.940 [2024-07-24 17:34:15.170442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.424 ms 00:33:28.940 [2024-07-24 17:34:15.170462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.940 [2024-07-24 17:34:15.170571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.940 [2024-07-24 17:34:15.170589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:28.940 [2024-07-24 17:34:15.170602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:28.940 [2024-07-24 17:34:15.170612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:28.940 [2024-07-24 17:34:15.172248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.542 ms, result 0 00:34:13.258 Copying: 1024/1024 [MB] (average 23 MBps) [2024-07-24 17:34:59.304337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.304676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:13.258 [2024-07-24 17:34:59.304833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:13.258 [2024-07-24 17:34:59.304968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.258 [2024-07-24 17:34:59.305046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:13.258 [2024-07-24 17:34:59.309463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.309666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:13.258 [2024-07-24 17:34:59.309787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:34:13.258 [2024-07-24 17:34:59.309856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.258 [2024-07-24 17:34:59.310242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.310390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:13.258 [2024-07-24 17:34:59.310504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:34:13.258 [2024-07-24 17:34:59.310525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.258 [2024-07-24 17:34:59.314848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.315024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:13.258 [2024-07-24 17:34:59.315136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms 00:34:13.258 [2024-07-24 17:34:59.315183] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:34:13.258 [2024-07-24 17:34:59.321332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.321499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:13.258 [2024-07-24 17:34:59.321625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.983 ms 00:34:13.258 [2024-07-24 17:34:59.321689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.258 [2024-07-24 17:34:59.350758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.258 [2024-07-24 17:34:59.350968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:13.258 [2024-07-24 17:34:59.351109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.886 ms 00:34:13.258 [2024-07-24 17:34:59.351157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.367568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.367806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:13.259 [2024-07-24 17:34:59.367939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.265 ms 00:34:13.259 [2024-07-24 17:34:59.368052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.372558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.372789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:13.259 [2024-07-24 17:34:59.372906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:34:13.259 [2024-07-24 17:34:59.372952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.401106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.401296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:13.259 [2024-07-24 17:34:59.401339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:34:13.259 [2024-07-24 17:34:59.401350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.428173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.428247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:13.259 [2024-07-24 17:34:59.428278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.781 ms 00:34:13.259 [2024-07-24 17:34:59.428288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.454002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.454042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:13.259 [2024-07-24 17:34:59.454086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 00:34:13.259 [2024-07-24 17:34:59.454096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.481098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.259 [2024-07-24 17:34:59.481155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:13.259 [2024-07-24 17:34:59.481187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.920 ms 
00:34:13.259 [2024-07-24 17:34:59.481197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.259 [2024-07-24 17:34:59.481236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:13.259 [2024-07-24 17:34:59.481258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:13.259 [2024-07-24 17:34:59.481272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:34:13.259 [2024-07-24 17:34:59.481284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 
17:34:59.481552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:34:13.259 [2024-07-24 17:34:59.481897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:13.259 [2024-07-24 17:34:59.481975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.481987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:13.260 [2024-07-24 17:34:59.482571] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:13.260 [2024-07-24 17:34:59.482584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6835e3a-4cec-4e40-b073-a8ea29e11d28 00:34:13.260 [2024-07-24 17:34:59.482603] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:34:13.260 [2024-07-24 17:34:59.482614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:13.260 [2024-07-24 17:34:59.482625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:13.260 [2024-07-24 17:34:59.482635] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:13.260 [2024-07-24 17:34:59.482672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:13.260 [2024-07-24 17:34:59.482685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:13.260 [2024-07-24 17:34:59.482697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:13.260 [2024-07-24 17:34:59.482707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:13.260 [2024-07-24 17:34:59.482717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:13.260 [2024-07-24 17:34:59.482729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.260 [2024-07-24 17:34:59.482740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:13.260 [2024-07-24 17:34:59.482759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:34:13.260 [2024-07-24 17:34:59.482770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.499170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.535 [2024-07-24 17:34:59.499232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:13.535 [2024-07-24 17:34:59.499263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.357 ms 00:34:13.535 [2024-07-24 17:34:59.499275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.499863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.535 [2024-07-24 17:34:59.499892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:13.535 [2024-07-24 17:34:59.499907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:34:13.535 [2024-07-24 17:34:59.499925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.533624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.533712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:13.535 [2024-07-24 17:34:59.533729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.533742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.533817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.533837] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:13.535 [2024-07-24 17:34:59.533848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.533863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.534014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.534033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:13.535 [2024-07-24 17:34:59.534045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.534056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.534077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.534090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:13.535 [2024-07-24 17:34:59.534101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.534111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.630064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.630120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:13.535 [2024-07-24 17:34:59.630153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.630163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.711177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.711237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:13.535 [2024-07-24 17:34:59.711273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.711293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.535 [2024-07-24 17:34:59.711418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.535 [2024-07-24 17:34:59.711433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:13.535 [2024-07-24 17:34:59.711475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.535 [2024-07-24 17:34:59.711502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.711587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.536 [2024-07-24 17:34:59.711603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:13.536 [2024-07-24 17:34:59.711615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.536 [2024-07-24 17:34:59.711626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.711747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.536 [2024-07-24 17:34:59.711808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:13.536 [2024-07-24 17:34:59.711823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.536 [2024-07-24 17:34:59.711835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.711885] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.536 [2024-07-24 17:34:59.711901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:13.536 [2024-07-24 17:34:59.711913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.536 [2024-07-24 17:34:59.711938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.712009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.536 [2024-07-24 17:34:59.712030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:13.536 [2024-07-24 17:34:59.712043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.536 [2024-07-24 17:34:59.712055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.712124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:13.536 [2024-07-24 17:34:59.712169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:13.536 [2024-07-24 17:34:59.712183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:13.536 [2024-07-24 17:34:59.712195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.536 [2024-07-24 17:34:59.712405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 408.011 ms, result 0 00:34:14.485 00:34:14.485 00:34:14.743 17:35:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:16.643 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:34:16.643 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:34:16.643 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:34:16.643 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:16.643 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:16.643 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:16.901 Process with pid 82334 is not found 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82334 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82334 ']' 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82334 00:34:16.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82334) - No such process 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82334 is not found' 00:34:16.901 17:35:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:34:17.160 Remove shared memory files 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:17.160 00:34:17.160 real 4m8.558s 00:34:17.160 user 4m56.040s 00:34:17.160 sys 0m38.722s 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:17.160 17:35:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:17.160 ************************************ 00:34:17.160 END TEST ftl_dirty_shutdown 00:34:17.160 ************************************ 00:34:17.160 17:35:03 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:17.160 17:35:03 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:17.160 17:35:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:17.160 17:35:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:17.160 ************************************ 00:34:17.160 START TEST ftl_upgrade_shutdown 00:34:17.160 ************************************ 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:17.160 * Looking for test storage... 00:34:17.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:17.160 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:34:17.161 
17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84897 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84897 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84897 ']' 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:17.161 17:35:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:17.428 [2024-07-24 17:35:03.436537] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
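The upgrade_shutdown test opens the way every tcp_target_setup does: launch spdk_tgt pinned to core 0, then wait until its RPC socket answers. A hedged sketch of that bring-up (waitforlisten is the autotest helper doing the waiting; the poll loop below is an approximation of its effect, not its actual implementation):

    # start the target on core 0 and poll until /var/tmp/spdk.sock responds
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask '[0]' &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done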
00:34:17.428 [2024-07-24 17:35:03.436708] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84897 ] 00:34:17.428 [2024-07-24 17:35:03.601858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.685 [2024-07-24 17:35:03.882418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:34:18.619 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:34:18.878 17:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:19.137 { 00:34:19.137 "name": "basen1", 00:34:19.137 "aliases": [ 00:34:19.137 "bcb844f8-f814-4a64-9d8c-f0ae60a5adb6" 00:34:19.137 ], 00:34:19.137 "product_name": "NVMe disk", 00:34:19.137 "block_size": 4096, 00:34:19.137 "num_blocks": 1310720, 00:34:19.137 "uuid": "bcb844f8-f814-4a64-9d8c-f0ae60a5adb6", 00:34:19.137 "assigned_rate_limits": { 00:34:19.137 "rw_ios_per_sec": 0, 00:34:19.137 "rw_mbytes_per_sec": 0, 00:34:19.137 "r_mbytes_per_sec": 0, 00:34:19.137 "w_mbytes_per_sec": 0 00:34:19.137 }, 00:34:19.137 "claimed": true, 00:34:19.137 "claim_type": "read_many_write_one", 00:34:19.137 "zoned": false, 00:34:19.137 "supported_io_types": { 00:34:19.137 "read": true, 00:34:19.137 "write": true, 00:34:19.137 "unmap": true, 00:34:19.137 "flush": true, 00:34:19.137 "reset": true, 00:34:19.137 "nvme_admin": true, 00:34:19.137 "nvme_io": true, 00:34:19.137 "nvme_io_md": false, 00:34:19.137 "write_zeroes": true, 00:34:19.137 "zcopy": false, 00:34:19.137 "get_zone_info": false, 00:34:19.137 "zone_management": false, 00:34:19.137 "zone_append": false, 00:34:19.137 "compare": true, 00:34:19.137 "compare_and_write": false, 00:34:19.137 "abort": true, 00:34:19.137 "seek_hole": false, 00:34:19.137 "seek_data": false, 00:34:19.137 "copy": true, 00:34:19.137 "nvme_iov_md": false 00:34:19.137 }, 00:34:19.137 "driver_specific": { 00:34:19.137 "nvme": [ 00:34:19.137 { 00:34:19.137 "pci_address": "0000:00:11.0", 00:34:19.137 "trid": { 00:34:19.137 "trtype": "PCIe", 00:34:19.137 "traddr": "0000:00:11.0" 00:34:19.137 }, 00:34:19.137 "ctrlr_data": { 00:34:19.137 "cntlid": 0, 00:34:19.137 "vendor_id": "0x1b36", 00:34:19.137 "model_number": "QEMU NVMe Ctrl", 00:34:19.137 "serial_number": "12341", 00:34:19.137 "firmware_revision": "8.0.0", 00:34:19.137 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:19.137 "oacs": { 00:34:19.137 "security": 0, 00:34:19.137 "format": 1, 00:34:19.137 "firmware": 0, 00:34:19.137 "ns_manage": 1 00:34:19.137 }, 00:34:19.137 "multi_ctrlr": false, 00:34:19.137 "ana_reporting": false 00:34:19.137 }, 00:34:19.137 "vs": { 00:34:19.137 "nvme_version": "1.4" 00:34:19.137 }, 00:34:19.137 "ns_data": { 00:34:19.137 "id": 1, 00:34:19.137 "can_share": false 00:34:19.137 } 00:34:19.137 } 00:34:19.137 ], 00:34:19.137 "mp_policy": "active_passive" 00:34:19.137 } 00:34:19.137 } 00:34:19.137 ]' 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:19.137 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:19.396 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=70efb99f-9b74-4e6f-8449-03fe53a45c5e 00:34:19.396 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:34:19.396 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70efb99f-9b74-4e6f-8449-03fe53a45c5e 00:34:19.654 17:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:34:19.912 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=24a70465-9831-4525-909a-e7881c381e9e 00:34:19.912 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 24a70465-9831-4525-909a-e7881c381e9e 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5cfec581-4aea-4799-a316-5a6b29c37f10 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5cfec581-4aea-4799-a316-5a6b29c37f10 ]] 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5cfec581-4aea-4799-a316-5a6b29c37f10 5120 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5cfec581-4aea-4799-a316-5a6b29c37f10 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5cfec581-4aea-4799-a316-5a6b29c37f10 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5cfec581-4aea-4799-a316-5a6b29c37f10 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:20.170 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cfec581-4aea-4799-a316-5a6b29c37f10 00:34:20.428 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:20.428 { 00:34:20.428 "name": "5cfec581-4aea-4799-a316-5a6b29c37f10", 00:34:20.428 "aliases": [ 00:34:20.428 "lvs/basen1p0" 00:34:20.428 ], 00:34:20.428 "product_name": "Logical Volume", 00:34:20.428 "block_size": 4096, 00:34:20.428 "num_blocks": 5242880, 00:34:20.428 "uuid": "5cfec581-4aea-4799-a316-5a6b29c37f10", 00:34:20.428 "assigned_rate_limits": { 00:34:20.428 "rw_ios_per_sec": 0, 00:34:20.428 "rw_mbytes_per_sec": 0, 00:34:20.428 "r_mbytes_per_sec": 0, 00:34:20.428 "w_mbytes_per_sec": 0 00:34:20.428 }, 00:34:20.428 "claimed": false, 00:34:20.428 "zoned": false, 00:34:20.428 "supported_io_types": { 00:34:20.428 "read": true, 00:34:20.428 "write": true, 00:34:20.428 "unmap": true, 00:34:20.428 "flush": false, 00:34:20.428 "reset": true, 00:34:20.428 "nvme_admin": false, 00:34:20.428 "nvme_io": false, 00:34:20.428 "nvme_io_md": false, 00:34:20.428 "write_zeroes": true, 00:34:20.428 
"zcopy": false, 00:34:20.428 "get_zone_info": false, 00:34:20.428 "zone_management": false, 00:34:20.428 "zone_append": false, 00:34:20.429 "compare": false, 00:34:20.429 "compare_and_write": false, 00:34:20.429 "abort": false, 00:34:20.429 "seek_hole": true, 00:34:20.429 "seek_data": true, 00:34:20.429 "copy": false, 00:34:20.429 "nvme_iov_md": false 00:34:20.429 }, 00:34:20.429 "driver_specific": { 00:34:20.429 "lvol": { 00:34:20.429 "lvol_store_uuid": "24a70465-9831-4525-909a-e7881c381e9e", 00:34:20.429 "base_bdev": "basen1", 00:34:20.429 "thin_provision": true, 00:34:20.429 "num_allocated_clusters": 0, 00:34:20.429 "snapshot": false, 00:34:20.429 "clone": false, 00:34:20.429 "esnap_clone": false 00:34:20.429 } 00:34:20.429 } 00:34:20.429 } 00:34:20.429 ]' 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:34:20.429 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:34:20.687 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:34:20.687 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:34:20.687 17:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:34:20.945 17:35:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:34:20.945 17:35:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:34:20.945 17:35:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5cfec581-4aea-4799-a316-5a6b29c37f10 -c cachen1p0 --l2p_dram_limit 2 00:34:21.204 [2024-07-24 17:35:07.286861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.286954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:21.204 [2024-07-24 17:35:07.286975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:21.204 [2024-07-24 17:35:07.286989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.287073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.287095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:21.204 [2024-07-24 17:35:07.287108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:34:21.204 [2024-07-24 17:35:07.287121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.287149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:21.204 [2024-07-24 17:35:07.288155] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:21.204 [2024-07-24 17:35:07.288203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.288222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:21.204 [2024-07-24 17:35:07.288235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.061 ms 00:34:21.204 [2024-07-24 17:35:07.288248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.288378] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e451d736-3d29-4ee9-85e9-d186cc06d64f 00:34:21.204 [2024-07-24 17:35:07.290296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.290348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:34:21.204 [2024-07-24 17:35:07.290382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:34:21.204 [2024-07-24 17:35:07.290393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.301120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.301183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:21.204 [2024-07-24 17:35:07.301218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.652 ms 00:34:21.204 [2024-07-24 17:35:07.301230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.301295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.301313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:21.204 [2024-07-24 17:35:07.301328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:34:21.204 [2024-07-24 17:35:07.301340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.301451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.301470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:21.204 [2024-07-24 17:35:07.301490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:21.204 [2024-07-24 17:35:07.301501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.301539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:21.204 [2024-07-24 17:35:07.306521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.306591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:21.204 [2024-07-24 17:35:07.306605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.996 ms 00:34:21.204 [2024-07-24 17:35:07.306620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.306671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.306703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:21.204 [2024-07-24 17:35:07.306716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:34:21.204 [2024-07-24 17:35:07.306729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
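For orientation amid the startup trace: the device stack that bdev_ftl_create is initializing here was assembled by the RPCs traced above. Condensed into one sequence, with the UUIDs and sizes exactly as printed earlier in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base NVMe at 00:11.0 -> basen1 (1310720 blocks x 4096 B = 5120 MiB)
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    # (any stale lvstore found by bdev_lvol_get_lvstores is deleted first)
    $rpc bdev_lvol_create_lvstore basen1 lvs
    # 20480 MiB thin-provisioned lvol on the 5 GiB base
    $rpc bdev_lvol_create basen1p0 20480 -t -u 24a70465-9831-4525-909a-e7881c381e9e
    # cache NVMe at 00:10.0 -> cachen1, split to get a 5120 MiB write-buffer slice
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1
    $rpc -t 60 bdev_ftl_create -b ftl -d 5cfec581-4aea-4799-a316-5a6b29c37f10 -c cachen1p0 --l2p_dram_limit 2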
00:34:21.204 [2024-07-24 17:35:07.306772] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:34:21.204 [2024-07-24 17:35:07.306943] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:21.204 [2024-07-24 17:35:07.306961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:21.204 [2024-07-24 17:35:07.306980] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:34:21.204 [2024-07-24 17:35:07.306995] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:21.204 [2024-07-24 17:35:07.307011] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:21.204 [2024-07-24 17:35:07.307054] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:21.204 [2024-07-24 17:35:07.307079] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:21.204 [2024-07-24 17:35:07.307091] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:21.204 [2024-07-24 17:35:07.307106] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:21.204 [2024-07-24 17:35:07.307119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.307132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:21.204 [2024-07-24 17:35:07.307144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:34:21.204 [2024-07-24 17:35:07.307158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.307251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.204 [2024-07-24 17:35:07.307269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:21.204 [2024-07-24 17:35:07.307281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:34:21.204 [2024-07-24 17:35:07.307297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.204 [2024-07-24 17:35:07.307443] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:21.204 [2024-07-24 17:35:07.307473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:21.204 [2024-07-24 17:35:07.307487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:21.204 [2024-07-24 17:35:07.307499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.204 [2024-07-24 17:35:07.307511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:21.204 [2024-07-24 17:35:07.307523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:21.204 [2024-07-24 17:35:07.307544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:21.204 [2024-07-24 17:35:07.307557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:21.204 [2024-07-24 17:35:07.307567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:21.204 [2024-07-24 17:35:07.307579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.204 [2024-07-24 17:35:07.307589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:21.204 [2024-07-24 17:35:07.307600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:34:21.204 [2024-07-24 17:35:07.307610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.204 [2024-07-24 17:35:07.307623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:21.204 [2024-07-24 17:35:07.307633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:21.204 [2024-07-24 17:35:07.307663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.204 [2024-07-24 17:35:07.307695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:21.204 [2024-07-24 17:35:07.307726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:21.204 [2024-07-24 17:35:07.307738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.307752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:21.205 [2024-07-24 17:35:07.307763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:21.205 [2024-07-24 17:35:07.307775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:21.205 [2024-07-24 17:35:07.307786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:21.205 [2024-07-24 17:35:07.307798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:21.205 [2024-07-24 17:35:07.307809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:21.205 [2024-07-24 17:35:07.307821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:21.205 [2024-07-24 17:35:07.307832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:21.205 [2024-07-24 17:35:07.307861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:21.205 [2024-07-24 17:35:07.307888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:21.205 [2024-07-24 17:35:07.307901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:21.205 [2024-07-24 17:35:07.307912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:21.205 [2024-07-24 17:35:07.307926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:21.205 [2024-07-24 17:35:07.307937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:21.205 [2024-07-24 17:35:07.307952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.307963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:21.205 [2024-07-24 17:35:07.307976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:21.205 [2024-07-24 17:35:07.308002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.308015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:21.205 [2024-07-24 17:35:07.308026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:21.205 [2024-07-24 17:35:07.308041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.308052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:21.205 [2024-07-24 17:35:07.308064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:21.205 [2024-07-24 17:35:07.308075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.308102] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:34:21.205 [2024-07-24 17:35:07.308115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:21.205 [2024-07-24 17:35:07.308127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:21.205 [2024-07-24 17:35:07.308138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:21.205 [2024-07-24 17:35:07.308152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:21.205 [2024-07-24 17:35:07.308163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:21.205 [2024-07-24 17:35:07.308194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:21.205 [2024-07-24 17:35:07.308204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:21.205 [2024-07-24 17:35:07.308217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:21.205 [2024-07-24 17:35:07.308228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:21.205 [2024-07-24 17:35:07.308244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:21.205 [2024-07-24 17:35:07.308261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:21.205 [2024-07-24 17:35:07.308301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:21.205 [2024-07-24 17:35:07.308336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:21.205 [2024-07-24 17:35:07.308347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:21.205 [2024-07-24 17:35:07.308359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:21.205 [2024-07-24 17:35:07.308370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:21.205 [2024-07-24 17:35:07.308457] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:21.205 [2024-07-24 17:35:07.308469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:21.205 [2024-07-24 17:35:07.308497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:21.205 [2024-07-24 17:35:07.308510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:21.205 [2024-07-24 17:35:07.308520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:21.205 [2024-07-24 17:35:07.308534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:21.205 [2024-07-24 17:35:07.308545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:21.205 [2024-07-24 17:35:07.308558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:34:21.205 [2024-07-24 17:35:07.308569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:21.205 [2024-07-24 17:35:07.308625] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:34:21.205 [2024-07-24 17:35:07.308682] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:23.773 [2024-07-24 17:35:09.788188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.788308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:23.773 [2024-07-24 17:35:09.788331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2479.572 ms 00:34:23.773 [2024-07-24 17:35:09.788342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.826480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.826553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:23.773 [2024-07-24 17:35:09.826591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.782 ms 00:34:23.773 [2024-07-24 17:35:09.826603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.826739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.826758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:23.773 [2024-07-24 17:35:09.826778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:34:23.773 [2024-07-24 17:35:09.826789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.862986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.863070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:23.773 [2024-07-24 17:35:09.863106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.090 ms 00:34:23.773 [2024-07-24 17:35:09.863118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.863168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.863183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:23.773 [2024-07-24 17:35:09.863202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:23.773 [2024-07-24 17:35:09.863212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.863923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.863971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:23.773 [2024-07-24 17:35:09.864019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:34:23.773 [2024-07-24 17:35:09.864031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.864112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.864131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:23.773 [2024-07-24 17:35:09.864145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:34:23.773 [2024-07-24 17:35:09.864156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.883189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.883247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:23.773 [2024-07-24 17:35:09.883284] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.003 ms 00:34:23.773 [2024-07-24 17:35:09.883296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.898339] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:23.773 [2024-07-24 17:35:09.899896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.899952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:23.773 [2024-07-24 17:35:09.899969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.443 ms 00:34:23.773 [2024-07-24 17:35:09.899984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.935788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.935895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:34:23.773 [2024-07-24 17:35:09.935914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.753 ms 00:34:23.773 [2024-07-24 17:35:09.935929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.936055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.936086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:23.773 [2024-07-24 17:35:09.936116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:34:23.773 [2024-07-24 17:35:09.936133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.961658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.961720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:34:23.773 [2024-07-24 17:35:09.961736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.422 ms 00:34:23.773 [2024-07-24 17:35:09.961753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.987085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.987147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:34:23.773 [2024-07-24 17:35:09.987163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.289 ms 00:34:23.773 [2024-07-24 17:35:09.987176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.773 [2024-07-24 17:35:09.988123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.773 [2024-07-24 17:35:09.988158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:23.773 [2024-07-24 17:35:09.988191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.904 ms 00:34:23.773 [2024-07-24 17:35:09.988204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.070330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.032 [2024-07-24 17:35:10.070429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:34:24.032 [2024-07-24 17:35:10.070450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.080 ms 00:34:24.032 [2024-07-24 17:35:10.070468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.097562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:34:24.032 [2024-07-24 17:35:10.097640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:34:24.032 [2024-07-24 17:35:10.097657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.044 ms 00:34:24.032 [2024-07-24 17:35:10.097681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.123041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.032 [2024-07-24 17:35:10.123123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:34:24.032 [2024-07-24 17:35:10.123150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.301 ms 00:34:24.032 [2024-07-24 17:35:10.123168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.148566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.032 [2024-07-24 17:35:10.148648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:24.032 [2024-07-24 17:35:10.148690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.354 ms 00:34:24.032 [2024-07-24 17:35:10.148705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.148755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.032 [2024-07-24 17:35:10.148776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:24.032 [2024-07-24 17:35:10.148788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:24.032 [2024-07-24 17:35:10.148803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.148947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.032 [2024-07-24 17:35:10.148973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:24.032 [2024-07-24 17:35:10.148985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:34:24.032 [2024-07-24 17:35:10.148999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.032 [2024-07-24 17:35:10.150301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2862.896 ms, result 0 00:34:24.032 { 00:34:24.032 "name": "ftl", 00:34:24.032 "uuid": "e451d736-3d29-4ee9-85e9-d186cc06d64f" 00:34:24.032 } 00:34:24.032 17:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:34:24.291 [2024-07-24 17:35:10.441350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.291 17:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:34:24.549 17:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:34:24.808 [2024-07-24 17:35:10.969912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:24.808 17:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:34:25.066 [2024-07-24 17:35:11.231778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:25.067 17:35:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:25.634 Fill FTL, iteration 1 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85014 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85014 /var/tmp/spdk.tgt.sock 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85014 ']' 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:25.634 17:35:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:25.635 [2024-07-24 17:35:11.667331] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
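The fill traffic that follows is not issued against the local bdev: the target first exported the FTL bdev over NVMe/TCP (the four RPCs traced above), and a second spdk_tgt, pinned to core 1, is being started here as the initiator. The export sequence, condensed, with the NQN, address, and port as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The initiator side then attaches to that subsystem (next in the log) with bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0, which surfaces the namespace locally as ftln1.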
00:34:25.635 [2024-07-24 17:35:11.667525] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85014 ] 00:34:25.635 [2024-07-24 17:35:11.835408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.893 [2024-07-24 17:35:12.080186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.829 17:35:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:26.829 17:35:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:34:26.829 17:35:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:34:27.087 ftln1 00:34:27.087 17:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:34:27.087 17:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85014 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85014 ']' 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85014 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85014 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:27.360 killing process with pid 85014 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85014' 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85014 00:34:27.360 17:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85014 00:34:29.890 17:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:34:29.890 17:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:29.890 [2024-07-24 17:35:16.013635] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
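Everything from here to the final checksum comparison is one loop, run for iterations=2 at qd=2 with 1 MiB blocks: fill 1 GiB of ftln1 from /dev/urandom at an advancing seek offset, read the same region back into a scratch file, and record its md5. A sketch of the per-iteration shape (tcp_dd in ftl/common.sh wraps spdk_dd with the initiator's ini.json, as the full command lines in this log show):

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    for ((i = 0; i < 2; i++)); do
        # fill 1 GiB of the exported device with fresh random data
        $dd_bin --cpumask '[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
                --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
        # read the same region back and record its checksum
        $dd_bin --cpumask '[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
                --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
                --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    done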
00:34:29.890 [2024-07-24 17:35:16.013813] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85071 ] 00:34:30.149 [2024-07-24 17:35:16.191111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.406 [2024-07-24 17:35:16.468925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.986  Copying: 213/1024 [MB] (213 MBps) Copying: 427/1024 [MB] (214 MBps) Copying: 639/1024 [MB] (212 MBps) Copying: 851/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:34:36.986 00:34:36.986 Calculate MD5 checksum, iteration 1 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:36.986 17:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:36.986 [2024-07-24 17:35:22.982473] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:34:36.986 [2024-07-24 17:35:22.982753] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85142 ] 00:34:36.986 [2024-07-24 17:35:23.158585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.252 [2024-07-24 17:35:23.398851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.936  Copying: 478/1024 [MB] (478 MBps) Copying: 924/1024 [MB] (446 MBps) Copying: 1024/1024 [MB] (average 458 MBps) 00:34:40.936 00:34:40.936 17:35:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:34:40.936 17:35:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:34:42.835 Fill FTL, iteration 2 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=48d4a5a2e13d07954262a01a22cb0125 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:42.835 17:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:34:42.835 [2024-07-24 17:35:28.989589] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:34:42.835 [2024-07-24 17:35:28.989802] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85204 ] 00:34:43.104 [2024-07-24 17:35:29.165121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.375 [2024-07-24 17:35:29.418724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.999  Copying: 209/1024 [MB] (209 MBps) Copying: 417/1024 [MB] (208 MBps) Copying: 622/1024 [MB] (205 MBps) Copying: 830/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:34:49.999 00:34:49.999 Calculate MD5 checksum, iteration 2 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:49.999 17:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:49.999 [2024-07-24 17:35:36.003155] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
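The seek/skip bookkeeping makes each pass self-checking: every tcp_dd covers 1024 MiB (--bs=1048576 × --count=1024), the fill just traced wrote the second window (--seek=1024, i.e. MiB 1024–2047), and the readback that starts below hashes exactly that window with --skip=1024. Together with the first digest (48d4a5a2e13d07954262a01a22cb0125) the recorded sums[] form a reference a later pass can re-check after the upgrade restart; that comparison is outside this excerpt, but under the same helpers it would look roughly like:

    # Hypothetical re-verification pass over the recorded sums[] (the actual
    # comparison is not shown in this excerpt; helpers as sketched above).
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for (( i = 0; i < ${#sums[@]}; i++ )); do
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        [[ $(md5sum "$file" | cut -f1 -d' ') == "${sums[i]}" ]] \
            || { echo "MD5 mismatch at window $i" >&2; exit 1; }
    done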
00:34:49.999 [2024-07-24 17:35:36.003336] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85278 ] 00:34:49.999 [2024-07-24 17:35:36.181252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.257 [2024-07-24 17:35:36.411562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.513  Copying: 466/1024 [MB] (466 MBps) Copying: 935/1024 [MB] (469 MBps) Copying: 1024/1024 [MB] (average 467 MBps) 00:34:54.513 00:34:54.513 17:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:34:54.513 17:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:57.043 17:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:34:57.043 17:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=58e39447f64e72aa0e2f77d82cb71f86 00:34:57.043 17:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:34:57.043 17:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:57.043 17:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:57.043 [2024-07-24 17:35:43.150317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.043 [2024-07-24 17:35:43.150385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:57.043 [2024-07-24 17:35:43.150408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:34:57.043 [2024-07-24 17:35:43.150429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.043 [2024-07-24 17:35:43.150481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.043 [2024-07-24 17:35:43.150498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:57.043 [2024-07-24 17:35:43.150519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:57.043 [2024-07-24 17:35:43.150531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.043 [2024-07-24 17:35:43.150574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.043 [2024-07-24 17:35:43.150588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:57.043 [2024-07-24 17:35:43.150601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:57.043 [2024-07-24 17:35:43.150612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.043 [2024-07-24 17:35:43.150714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.384 ms, result 0 00:34:57.043 true 00:34:57.043 17:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:57.300 { 00:34:57.300 "name": "ftl", 00:34:57.300 "properties": [ 00:34:57.300 { 00:34:57.300 "name": "superblock_version", 00:34:57.300 "value": 5, 00:34:57.300 "read-only": true 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "name": "base_device", 00:34:57.300 "bands": [ 00:34:57.300 { 00:34:57.300 "id": 0, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 
00:34:57.300 { 00:34:57.300 "id": 1, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 2, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 3, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 4, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 5, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 6, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 7, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 8, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 9, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 10, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 11, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 12, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 13, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 14, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 15, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 16, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 17, 00:34:57.300 "state": "FREE", 00:34:57.300 "validity": 0.0 00:34:57.300 } 00:34:57.300 ], 00:34:57.300 "read-only": true 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "name": "cache_device", 00:34:57.300 "type": "bdev", 00:34:57.300 "chunks": [ 00:34:57.300 { 00:34:57.300 "id": 0, 00:34:57.300 "state": "INACTIVE", 00:34:57.300 "utilization": 0.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 1, 00:34:57.300 "state": "CLOSED", 00:34:57.300 "utilization": 1.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 2, 00:34:57.300 "state": "CLOSED", 00:34:57.300 "utilization": 1.0 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 3, 00:34:57.300 "state": "OPEN", 00:34:57.300 "utilization": 0.001953125 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "id": 4, 00:34:57.300 "state": "OPEN", 00:34:57.300 "utilization": 0.0 00:34:57.300 } 00:34:57.300 ], 00:34:57.300 "read-only": true 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "name": "verbose_mode", 00:34:57.300 "value": true, 00:34:57.300 "unit": "", 00:34:57.300 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:57.300 }, 00:34:57.300 { 00:34:57.300 "name": "prep_upgrade_on_shutdown", 00:34:57.300 "value": false, 00:34:57.300 "unit": "", 00:34:57.300 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:57.300 } 00:34:57.300 ] 00:34:57.300 } 00:34:57.300 17:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:34:57.557 [2024-07-24 17:35:43.743184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.557 [2024-07-24 
17:35:43.743248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:57.557 [2024-07-24 17:35:43.743269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:57.557 [2024-07-24 17:35:43.743282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.557 [2024-07-24 17:35:43.743318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.557 [2024-07-24 17:35:43.743334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:57.558 [2024-07-24 17:35:43.743347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:57.558 [2024-07-24 17:35:43.743357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.558 [2024-07-24 17:35:43.743385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:57.558 [2024-07-24 17:35:43.743428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:57.558 [2024-07-24 17:35:43.743454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:57.558 [2024-07-24 17:35:43.743479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:57.558 [2024-07-24 17:35:43.743633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.371 ms, result 0 00:34:57.558 true 00:34:57.558 17:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:34:57.558 17:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:34:57.558 17:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:58.124 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:34:58.124 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:34:58.124 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:58.124 [2024-07-24 17:35:44.348129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:58.124 [2024-07-24 17:35:44.348195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:58.124 [2024-07-24 17:35:44.348217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:58.124 [2024-07-24 17:35:44.348229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:58.124 [2024-07-24 17:35:44.348264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:58.124 [2024-07-24 17:35:44.348281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:58.124 [2024-07-24 17:35:44.348293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:58.124 [2024-07-24 17:35:44.348304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:58.124 [2024-07-24 17:35:44.348331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:58.124 [2024-07-24 17:35:44.348345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:58.124 [2024-07-24 17:35:44.348357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:58.124 [2024-07-24 17:35:44.348369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
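The guard traced above is how the script proves the NV cache is non-empty before arming the upgrade path: it fetches bdev_ftl_get_properties and counts cache_device chunks whose utilization is non-zero. Against the dump above that yields 3 (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at 0.001953125), so used=3 and the [[ 3 -eq 0 ]] test correctly falls through. The same check, with the jq filter verbatim from the trace, runnable on its own:

    # Count in-use NV cache chunks; the jq filter is verbatim from the trace.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && echo "NV cache unexpectedly empty" >&2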
00:34:58.124 [2024-07-24 17:35:44.348446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.302 ms, result 0 00:34:58.124 true 00:34:58.382 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:58.653 { 00:34:58.653 "name": "ftl", 00:34:58.653 "properties": [ 00:34:58.653 { 00:34:58.653 "name": "superblock_version", 00:34:58.653 "value": 5, 00:34:58.653 "read-only": true 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "name": "base_device", 00:34:58.653 "bands": [ 00:34:58.653 { 00:34:58.653 "id": 0, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 1, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 2, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 3, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 4, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 5, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 6, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 7, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 8, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 9, 00:34:58.653 "state": "FREE", 00:34:58.653 "validity": 0.0 00:34:58.653 }, 00:34:58.653 { 00:34:58.653 "id": 10, 00:34:58.653 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 11, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 12, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 13, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 14, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 15, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 16, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 17, 00:34:58.654 "state": "FREE", 00:34:58.654 "validity": 0.0 00:34:58.654 } 00:34:58.654 ], 00:34:58.654 "read-only": true 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "name": "cache_device", 00:34:58.654 "type": "bdev", 00:34:58.654 "chunks": [ 00:34:58.654 { 00:34:58.654 "id": 0, 00:34:58.654 "state": "INACTIVE", 00:34:58.654 "utilization": 0.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 1, 00:34:58.654 "state": "CLOSED", 00:34:58.654 "utilization": 1.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 2, 00:34:58.654 "state": "CLOSED", 00:34:58.654 "utilization": 1.0 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 3, 00:34:58.654 "state": "OPEN", 00:34:58.654 "utilization": 0.001953125 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "id": 4, 00:34:58.654 "state": "OPEN", 00:34:58.654 "utilization": 0.0 00:34:58.654 } 00:34:58.654 ], 00:34:58.654 "read-only": true 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "name": "verbose_mode", 00:34:58.654 "value": 
true, 00:34:58.654 "unit": "", 00:34:58.654 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:58.654 }, 00:34:58.654 { 00:34:58.654 "name": "prep_upgrade_on_shutdown", 00:34:58.654 "value": true, 00:34:58.654 "unit": "", 00:34:58.654 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:58.654 } 00:34:58.654 ] 00:34:58.654 } 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84897 ]] 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84897 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84897 ']' 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84897 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84897 00:34:58.654 killing process with pid 84897 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84897' 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84897 00:34:58.654 17:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84897 00:34:59.603 [2024-07-24 17:35:45.774569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:59.603 [2024-07-24 17:35:45.793234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.603 [2024-07-24 17:35:45.793314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:59.603 [2024-07-24 17:35:45.793351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:59.603 [2024-07-24 17:35:45.793364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.603 [2024-07-24 17:35:45.793397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:59.603 [2024-07-24 17:35:45.797021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.603 [2024-07-24 17:35:45.797072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:59.603 [2024-07-24 17:35:45.797118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.602 ms 00:34:59.603 [2024-07-24 17:35:45.797130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.723428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.723551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:09.571 [2024-07-24 17:35:54.723573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8926.306 ms 00:35:09.571 [2024-07-24 17:35:54.723586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.828984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:09.571 [2024-07-24 17:35:54.829046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:09.571 [2024-07-24 17:35:54.829068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.370 ms 00:35:09.571 [2024-07-24 17:35:54.829081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.830367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.830416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:09.571 [2024-07-24 17:35:54.830456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:35:09.571 [2024-07-24 17:35:54.830468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.844987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.845064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:09.571 [2024-07-24 17:35:54.845111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.446 ms 00:35:09.571 [2024-07-24 17:35:54.845136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.853410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.853457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:09.571 [2024-07-24 17:35:54.853489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.234 ms 00:35:09.571 [2024-07-24 17:35:54.853501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.853627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.853647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:09.571 [2024-07-24 17:35:54.853706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:35:09.571 [2024-07-24 17:35:54.853720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.866193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.866234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:35:09.571 [2024-07-24 17:35:54.866265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.449 ms 00:35:09.571 [2024-07-24 17:35:54.866276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.878394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.878432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:35:09.571 [2024-07-24 17:35:54.878462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.078 ms 00:35:09.571 [2024-07-24 17:35:54.878471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.571 [2024-07-24 17:35:54.890762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.571 [2024-07-24 17:35:54.890801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:09.572 [2024-07-24 17:35:54.890831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.253 ms 00:35:09.572 [2024-07-24 17:35:54.890841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.902008] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.572 [2024-07-24 17:35:54.902046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:09.572 [2024-07-24 17:35:54.902092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.090 ms 00:35:09.572 [2024-07-24 17:35:54.902101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.902138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:09.572 [2024-07-24 17:35:54.902159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:09.572 [2024-07-24 17:35:54.902173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:09.572 [2024-07-24 17:35:54.902184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:09.572 [2024-07-24 17:35:54.902195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:09.572 [2024-07-24 17:35:54.902372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:09.572 [2024-07-24 17:35:54.902382] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e451d736-3d29-4ee9-85e9-d186cc06d64f 00:35:09.572 [2024-07-24 17:35:54.902393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:09.572 [2024-07-24 17:35:54.902403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total 
writes: 786752 00:35:09.572 [2024-07-24 17:35:54.902418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:35:09.572 [2024-07-24 17:35:54.902429] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:35:09.572 [2024-07-24 17:35:54.902439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:09.572 [2024-07-24 17:35:54.902449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:09.572 [2024-07-24 17:35:54.902459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:09.572 [2024-07-24 17:35:54.902468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:09.572 [2024-07-24 17:35:54.902478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:09.572 [2024-07-24 17:35:54.902488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.572 [2024-07-24 17:35:54.902498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:09.572 [2024-07-24 17:35:54.902510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:35:09.572 [2024-07-24 17:35:54.902520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.917599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.572 [2024-07-24 17:35:54.917643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:09.572 [2024-07-24 17:35:54.917705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.055 ms 00:35:09.572 [2024-07-24 17:35:54.917722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.918208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:09.572 [2024-07-24 17:35:54.918235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:09.572 [2024-07-24 17:35:54.918250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:35:09.572 [2024-07-24 17:35:54.918261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.965734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:54.965810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:09.572 [2024-07-24 17:35:54.965851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:54.965862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.965921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:54.965939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:09.572 [2024-07-24 17:35:54.965951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:54.965968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.966104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:54.966128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:09.572 [2024-07-24 17:35:54.966139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:54.966155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:54.966177] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:54.966190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:09.572 [2024-07-24 17:35:54.966201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:54.966211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.066223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.066303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:09.572 [2024-07-24 17:35:55.066339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.066351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.146609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.146737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:09.572 [2024-07-24 17:35:55.146772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.146783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.146917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.146936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:09.572 [2024-07-24 17:35:55.146955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.146966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.147058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.147092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:09.572 [2024-07-24 17:35:55.147105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.147116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.147238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.147256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:09.572 [2024-07-24 17:35:55.147268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.147286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.147352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.147368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:09.572 [2024-07-24 17:35:55.147381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.147392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.147439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.147454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:09.572 [2024-07-24 17:35:55.147465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.147482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 
17:35:55.147551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:09.572 [2024-07-24 17:35:55.147583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:09.572 [2024-07-24 17:35:55.147595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:09.572 [2024-07-24 17:35:55.147606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:09.572 [2024-07-24 17:35:55.147811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9354.560 ms, result 0 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85501 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85501 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85501 ']' 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.856 17:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:12.856 [2024-07-24 17:35:58.554525] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
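Two details worth pulling out of the tail above. First, the shutdown statistics are self-consistent: the test wrote 2 × 1024 MiB of user data, which at the 4 KiB FTL block size implied by the layout dump (0x20 blocks = 0.12 MiB) is 2048 · 256 = 524288 blocks, exactly the "user writes" reported; with 786752 total writes that gives WAF = 786752 / 524288 ≈ 1.5006, as logged. Second, after killing pid 84897 the script restarts the target (pid 85501) and blocks in waitforlisten until the new process accepts RPC. A rough stand-in for that wait, using the pid, rpc_addr and max_retries=100 values visible in the trace (the real helper's polling mechanism is an assumption here):

    # Rough stand-in for waitforlisten (polling mechanism assumed; pid,
    # rpc_addr and max_retries are taken from the trace above).
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # timed out waiting for the RPC socket
    }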
00:35:12.856 [2024-07-24 17:35:58.554746] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85501 ] 00:35:12.856 [2024-07-24 17:35:58.732087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.856 [2024-07-24 17:35:58.969504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.790 [2024-07-24 17:35:59.815381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:13.790 [2024-07-24 17:35:59.815473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:13.790 [2024-07-24 17:35:59.962130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.962179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:13.790 [2024-07-24 17:35:59.962214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:13.790 [2024-07-24 17:35:59.962225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.962301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.962318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:13.790 [2024-07-24 17:35:59.962330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:35:13.790 [2024-07-24 17:35:59.962341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.962374] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:13.790 [2024-07-24 17:35:59.963302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:13.790 [2024-07-24 17:35:59.963359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.963372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:13.790 [2024-07-24 17:35:59.963384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:35:13.790 [2024-07-24 17:35:59.963400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.965490] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:13.790 [2024-07-24 17:35:59.981053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.981109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:13.790 [2024-07-24 17:35:59.981143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.564 ms 00:35:13.790 [2024-07-24 17:35:59.981170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.981307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.981327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:13.790 [2024-07-24 17:35:59.981341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:35:13.790 [2024-07-24 17:35:59.981352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.990513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 
17:35:59.990558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:13.790 [2024-07-24 17:35:59.990574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.052 ms 00:35:13.790 [2024-07-24 17:35:59.990586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.990694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.790 [2024-07-24 17:35:59.990717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:13.790 [2024-07-24 17:35:59.990735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:35:13.790 [2024-07-24 17:35:59.990747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.790 [2024-07-24 17:35:59.990817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.791 [2024-07-24 17:35:59.990835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:13.791 [2024-07-24 17:35:59.990849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:35:13.791 [2024-07-24 17:35:59.990860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.791 [2024-07-24 17:35:59.990899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:13.791 [2024-07-24 17:35:59.995895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.791 [2024-07-24 17:35:59.995936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:13.791 [2024-07-24 17:35:59.995953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.006 ms 00:35:13.791 [2024-07-24 17:35:59.995966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.791 [2024-07-24 17:35:59.996008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.791 [2024-07-24 17:35:59.996023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:13.791 [2024-07-24 17:35:59.996040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:13.791 [2024-07-24 17:35:59.996051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.791 [2024-07-24 17:35:59.996129] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:13.791 [2024-07-24 17:35:59.996164] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:13.791 [2024-07-24 17:35:59.996209] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:13.791 [2024-07-24 17:35:59.996231] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:35:13.791 [2024-07-24 17:35:59.996355] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:13.791 [2024-07-24 17:35:59.996386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:13.791 [2024-07-24 17:35:59.996402] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:35:13.791 [2024-07-24 17:35:59.996418] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:13.791 [2024-07-24 17:35:59.996432] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:35:13.791 [2024-07-24 17:35:59.996444] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:13.791 [2024-07-24 17:35:59.996455] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:13.791 [2024-07-24 17:35:59.996466] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:13.791 [2024-07-24 17:35:59.996477] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:13.791 [2024-07-24 17:35:59.996490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.791 [2024-07-24 17:35:59.996501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:13.791 [2024-07-24 17:35:59.996513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:35:13.791 [2024-07-24 17:35:59.996528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.791 [2024-07-24 17:35:59.996630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.791 [2024-07-24 17:35:59.996667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:13.791 [2024-07-24 17:35:59.996680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:35:13.791 [2024-07-24 17:35:59.996691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.791 [2024-07-24 17:35:59.996806] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:13.791 [2024-07-24 17:35:59.996823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:13.791 [2024-07-24 17:35:59.996841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:13.791 [2024-07-24 17:35:59.996853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.996870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:13.791 [2024-07-24 17:35:59.996881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.996894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:13.791 [2024-07-24 17:35:59.996904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:13.791 [2024-07-24 17:35:59.996915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:13.791 [2024-07-24 17:35:59.996925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.996935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:13.791 [2024-07-24 17:35:59.996945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:13.791 [2024-07-24 17:35:59.996955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.996965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:13.791 [2024-07-24 17:35:59.996975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:13.791 [2024-07-24 17:35:59.996985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.996996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:13.791 [2024-07-24 17:35:59.997006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:13.791 [2024-07-24 17:35:59.997016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997026] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:13.791 [2024-07-24 17:35:59.997037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:13.791 [2024-07-24 17:35:59.997068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:13.791 [2024-07-24 17:35:59.997099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:13.791 [2024-07-24 17:35:59.997130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:13.791 [2024-07-24 17:35:59.997161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:13.791 [2024-07-24 17:35:59.997192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:13.791 [2024-07-24 17:35:59.997225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:13.791 [2024-07-24 17:35:59.997257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:13.791 [2024-07-24 17:35:59.997267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997277] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:13.791 [2024-07-24 17:35:59.997288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:13.791 [2024-07-24 17:35:59.997299] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:13.791 [2024-07-24 17:35:59.997322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:13.791 [2024-07-24 17:35:59.997333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:13.791 [2024-07-24 17:35:59.997343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:13.791 [2024-07-24 17:35:59.997353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:13.791 [2024-07-24 17:35:59.997376] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:13.791 [2024-07-24 17:35:59.997387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:13.791 [2024-07-24 17:35:59.997399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:13.791 [2024-07-24 17:35:59.997419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:13.791 [2024-07-24 17:35:59.997432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:13.791 [2024-07-24 17:35:59.997443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:13.791 [2024-07-24 17:35:59.997455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:13.791 [2024-07-24 17:35:59.997466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:13.791 [2024-07-24 17:35:59.997477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:13.791 [2024-07-24 17:35:59.997488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:13.791 [2024-07-24 17:35:59.997500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:13.791 [2024-07-24 17:35:59.997511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:13.791 [2024-07-24 17:35:59.997522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:13.792 [2024-07-24 17:35:59.997591] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:13.792 [2024-07-24 17:35:59.997605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:13.792 [2024-07-24 17:35:59.997630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:13.792 [2024-07-24 17:35:59.997656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:13.792 [2024-07-24 17:35:59.997671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:13.792 [2024-07-24 17:35:59.997683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:13.792 [2024-07-24 17:35:59.997695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:13.792 [2024-07-24 17:35:59.997707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.942 ms 00:35:13.792 [2024-07-24 17:35:59.997723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:13.792 [2024-07-24 17:35:59.997789] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:35:13.792 [2024-07-24 17:35:59.997807] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:16.323 [2024-07-24 17:36:02.464972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.323 [2024-07-24 17:36:02.465046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:16.323 [2024-07-24 17:36:02.465066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2467.194 ms 00:35:16.323 [2024-07-24 17:36:02.465090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.323 [2024-07-24 17:36:02.506349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.323 [2024-07-24 17:36:02.506413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:16.323 [2024-07-24 17:36:02.506433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.929 ms 00:35:16.323 [2024-07-24 17:36:02.506444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.323 [2024-07-24 17:36:02.506667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.323 [2024-07-24 17:36:02.506702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:16.323 [2024-07-24 17:36:02.506716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:16.323 [2024-07-24 17:36:02.506728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.323 [2024-07-24 17:36:02.552453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.323 [2024-07-24 17:36:02.552509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:16.323 [2024-07-24 17:36:02.552555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.633 ms 00:35:16.323 [2024-07-24 17:36:02.552567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.323 [2024-07-24 17:36:02.552636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.323 [2024-07-24 17:36:02.552651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:16.324 [2024-07-24 17:36:02.552665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:16.324 [2024-07-24 17:36:02.552713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.324 [2024-07-24 17:36:02.553358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.324 [2024-07-24 17:36:02.553383] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:16.324 [2024-07-24 17:36:02.553397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:35:16.324 [2024-07-24 17:36:02.553408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.324 [2024-07-24 17:36:02.553471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.324 [2024-07-24 17:36:02.553487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:16.324 [2024-07-24 17:36:02.553500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:35:16.324 [2024-07-24 17:36:02.553511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.575284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.575333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:16.583 [2024-07-24 17:36:02.575352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.743 ms 00:35:16.583 [2024-07-24 17:36:02.575375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.593500] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:35:16.583 [2024-07-24 17:36:02.593547] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:16.583 [2024-07-24 17:36:02.593567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.593579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:35:16.583 [2024-07-24 17:36:02.593593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.019 ms 00:35:16.583 [2024-07-24 17:36:02.593604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.611686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.611747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:35:16.583 [2024-07-24 17:36:02.611779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.015 ms 00:35:16.583 [2024-07-24 17:36:02.611806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.627419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.627492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:35:16.583 [2024-07-24 17:36:02.627511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.548 ms 00:35:16.583 [2024-07-24 17:36:02.627522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.642830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.642886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:35:16.583 [2024-07-24 17:36:02.642936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.248 ms 00:35:16.583 [2024-07-24 17:36:02.642947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.643949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.643977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:16.583 [2024-07-24 
17:36:02.643997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:35:16.583 [2024-07-24 17:36:02.644008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.739587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.739670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:16.583 [2024-07-24 17:36:02.739694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.541 ms 00:35:16.583 [2024-07-24 17:36:02.739707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.752949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:16.583 [2024-07-24 17:36:02.754072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.754116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:16.583 [2024-07-24 17:36:02.754153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.283 ms 00:35:16.583 [2024-07-24 17:36:02.754164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.754276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.754294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:35:16.583 [2024-07-24 17:36:02.754307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:16.583 [2024-07-24 17:36:02.754318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.754394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.754411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:16.583 [2024-07-24 17:36:02.754439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:16.583 [2024-07-24 17:36:02.754454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.754488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.754534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:16.583 [2024-07-24 17:36:02.754546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:16.583 [2024-07-24 17:36:02.754556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.754598] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:16.583 [2024-07-24 17:36:02.754614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.754640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:16.583 [2024-07-24 17:36:02.754668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:35:16.583 [2024-07-24 17:36:02.754734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.786883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.786930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:16.583 [2024-07-24 17:36:02.786949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.097 ms 00:35:16.583 [2024-07-24 17:36:02.786961] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.787079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.583 [2024-07-24 17:36:02.787099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:16.583 [2024-07-24 17:36:02.787113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:35:16.583 [2024-07-24 17:36:02.787132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.583 [2024-07-24 17:36:02.788844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2826.161 ms, result 0 00:35:16.583 [2024-07-24 17:36:02.803349] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.583 [2024-07-24 17:36:02.819350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:16.842 [2024-07-24 17:36:02.828952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:16.842 17:36:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:16.842 17:36:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:35:16.842 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:16.842 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:16.842 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:17.407 [2024-07-24 17:36:03.345469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.407 [2024-07-24 17:36:03.345825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:17.407 [2024-07-24 17:36:03.345856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:17.407 [2024-07-24 17:36:03.345872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.407 [2024-07-24 17:36:03.345923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.407 [2024-07-24 17:36:03.345939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:17.407 [2024-07-24 17:36:03.345953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:17.407 [2024-07-24 17:36:03.345965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.407 [2024-07-24 17:36:03.345993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.407 [2024-07-24 17:36:03.346007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:17.407 [2024-07-24 17:36:03.346031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:17.407 [2024-07-24 17:36:03.346049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.407 [2024-07-24 17:36:03.346127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.650 ms, result 0 00:35:17.407 true 00:35:17.407 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:17.407 { 00:35:17.407 "name": "ftl", 00:35:17.407 "properties": [ 00:35:17.407 { 00:35:17.407 "name": "superblock_version", 00:35:17.407 "value": 5, 00:35:17.407 "read-only": true 00:35:17.407 }, 
00:35:17.407 { 00:35:17.407 "name": "base_device", 00:35:17.407 "bands": [ 00:35:17.407 { 00:35:17.407 "id": 0, 00:35:17.407 "state": "CLOSED", 00:35:17.407 "validity": 1.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 1, 00:35:17.407 "state": "CLOSED", 00:35:17.407 "validity": 1.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 2, 00:35:17.407 "state": "CLOSED", 00:35:17.407 "validity": 0.007843137254901933 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 3, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 4, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 5, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 6, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 7, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 8, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 9, 00:35:17.407 "state": "FREE", 00:35:17.407 "validity": 0.0 00:35:17.407 }, 00:35:17.407 { 00:35:17.407 "id": 10, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 11, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 12, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 13, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 14, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 15, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 16, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 17, 00:35:17.408 "state": "FREE", 00:35:17.408 "validity": 0.0 00:35:17.408 } 00:35:17.408 ], 00:35:17.408 "read-only": true 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "name": "cache_device", 00:35:17.408 "type": "bdev", 00:35:17.408 "chunks": [ 00:35:17.408 { 00:35:17.408 "id": 0, 00:35:17.408 "state": "INACTIVE", 00:35:17.408 "utilization": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 1, 00:35:17.408 "state": "OPEN", 00:35:17.408 "utilization": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 2, 00:35:17.408 "state": "OPEN", 00:35:17.408 "utilization": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 3, 00:35:17.408 "state": "FREE", 00:35:17.408 "utilization": 0.0 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "id": 4, 00:35:17.408 "state": "FREE", 00:35:17.408 "utilization": 0.0 00:35:17.408 } 00:35:17.408 ], 00:35:17.408 "read-only": true 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "name": "verbose_mode", 00:35:17.408 "value": true, 00:35:17.408 "unit": "", 00:35:17.408 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:17.408 }, 00:35:17.408 { 00:35:17.408 "name": "prep_upgrade_on_shutdown", 00:35:17.408 "value": false, 00:35:17.408 "unit": "", 00:35:17.408 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:17.408 } 00:35:17.408 ] 00:35:17.408 } 00:35:17.666 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:35:17.666 17:36:03 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:17.666 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:17.927 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:35:17.927 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:35:17.927 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:35:17.927 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:17.927 17:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:35:18.185 Validate MD5 checksum, iteration 1 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:18.185 17:36:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:18.185 [2024-07-24 17:36:04.346481] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
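The used=0 and opened=0 accounting above comes from filtering the bdev_ftl_get_properties JSON with jq, right after verbose_mode is enabled via bdev_ftl_set_property. A minimal stand-alone sketch of that check, assuming the RPC output has been captured to a file; the props.json filename and the $rpc variable are illustrative, the jq filter is the one from the xtrace:

    # Flip verbose_mode on, dump the FTL properties, then count NV-cache
    # chunks that still hold data ($rpc and props.json are illustrative).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_ftl_set_property -b ftl -p verbose_mode -v true
    "$rpc" bdev_ftl_get_properties -b ftl > props.json
    used=$(jq '[.properties[]
                | select(.name == "cache_device")
                | .chunks[]
                | select(.utilization != 0.0)] | length' props.json)
    echo "dirty NV-cache chunks: $used"

Against the dump above this yields 0: chunks 1 and 2 are OPEN but at 0.0 utilization, so nothing is pending writeback and the run proceeds straight to checksum validation.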
00:35:18.185 [2024-07-24 17:36:04.347603] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85574 ] 00:35:18.444 [2024-07-24 17:36:04.524505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.702 [2024-07-24 17:36:04.808688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.624  Copying: 480/1024 [MB] (480 MBps) Copying: 864/1024 [MB] (384 MBps) Copying: 1024/1024 [MB] (average 425 MBps) 00:35:23.625 00:35:23.625 17:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:35:23.625 17:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:25.520 Validate MD5 checksum, iteration 2 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=48d4a5a2e13d07954262a01a22cb0125 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 48d4a5a2e13d07954262a01a22cb0125 != \4\8\d\4\a\5\a\2\e\1\3\d\0\7\9\5\4\2\6\2\a\0\1\a\2\2\c\b\0\1\2\5 ]] 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:25.520 17:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:25.520 [2024-07-24 17:36:11.568723] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
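Iteration 1 above completes the full validation pattern: stream 1024 MiB out of the exported ftln1 bdev with the tcp_dd helper from test/ftl/common.sh, hash the dump, compare, and advance the skip offset. A condensed sketch of one such pass; $testdir, $skip, and $expected are placeholders, with $expected holding the digest recorded for the same range before the shutdown:

    # One checksum-validation pass (sketch). tcp_dd wraps spdk_dd over
    # NVMe/TCP as shown in the xtrace above; variables are placeholders.
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    [[ $sum == "$expected" ]] || { echo "MD5 mismatch at skip=$skip"; exit 1; }
    skip=$((skip + 1024))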
00:35:25.520 [2024-07-24 17:36:11.568863] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85648 ] 00:35:25.520 [2024-07-24 17:36:11.732067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.778 [2024-07-24 17:36:11.981242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.252  Copying: 446/1024 [MB] (446 MBps) Copying: 894/1024 [MB] (448 MBps) Copying: 1024/1024 [MB] (average 446 MBps) 00:35:30.252 00:35:30.252 17:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:35:30.252 17:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=58e39447f64e72aa0e2f77d82cb71f86 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 58e39447f64e72aa0e2f77d82cb71f86 != \5\8\e\3\9\4\4\7\f\6\4\e\7\2\a\a\0\e\2\f\7\7\d\8\2\c\b\7\1\f\8\6 ]] 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85501 ]] 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85501 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85721 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85721 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85721 ']' 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:32.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
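Once both digests are recorded, tcp_target_shutdown_dirty SIGKILLs the target so FTL never runs its clean-shutdown path, then a fresh target is booted from the saved tgt.json. A sketch of that sequence under the same assumptions; the spdk_tgt_pid bookkeeping is illustrative, waitforlisten is the stock autotest_common.sh helper seen in the xtrace:

    # Simulate a dirty shutdown, then restart from the saved JSON config.
    # kill -9 skips FTL's shutdown path, leaving the dirty flag set on media.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock answers

The next startup therefore takes the recovery path logged below: SHM state reports clean 0, so band state, the P2L checkpoints, and the two open NV-cache chunks (seq ids 14 and 15) are all rebuilt from media before I/O is allowed again.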
00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:32.155 17:36:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:32.155 [2024-07-24 17:36:18.267769] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:35:32.156 [2024-07-24 17:36:18.267941] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85721 ] 00:35:32.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 85501 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:35:32.414 [2024-07-24 17:36:18.435756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.414 [2024-07-24 17:36:18.625853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.351 [2024-07-24 17:36:19.433884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:33.351 [2024-07-24 17:36:19.434009] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:33.351 [2024-07-24 17:36:19.581585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.351 [2024-07-24 17:36:19.581640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:33.351 [2024-07-24 17:36:19.581714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:33.351 [2024-07-24 17:36:19.581728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.351 [2024-07-24 17:36:19.581816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.351 [2024-07-24 17:36:19.581837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:33.351 [2024-07-24 17:36:19.581851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:35:33.351 [2024-07-24 17:36:19.581863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.351 [2024-07-24 17:36:19.581905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:33.351 [2024-07-24 17:36:19.582764] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:33.351 [2024-07-24 17:36:19.582799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.351 [2024-07-24 17:36:19.582830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:33.351 [2024-07-24 17:36:19.582843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:35:33.351 [2024-07-24 17:36:19.582860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.351 [2024-07-24 17:36:19.583381] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:33.611 [2024-07-24 17:36:19.602294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.602339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:33.611 [2024-07-24 17:36:19.602384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.914 ms 00:35:33.611 [2024-07-24 17:36:19.602396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.612077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:35:33.611 [2024-07-24 17:36:19.612121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:33.611 [2024-07-24 17:36:19.612158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:35:33.611 [2024-07-24 17:36:19.612170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.612677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.612749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:33.611 [2024-07-24 17:36:19.612766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:35:33.611 [2024-07-24 17:36:19.612778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.612848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.612870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:33.611 [2024-07-24 17:36:19.612900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:35:33.611 [2024-07-24 17:36:19.612911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.612954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.612973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:33.611 [2024-07-24 17:36:19.612991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:33.611 [2024-07-24 17:36:19.613002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.613037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:33.611 [2024-07-24 17:36:19.616455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.616500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:33.611 [2024-07-24 17:36:19.616518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.425 ms 00:35:33.611 [2024-07-24 17:36:19.616529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.616568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.616587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:33.611 [2024-07-24 17:36:19.616599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:33.611 [2024-07-24 17:36:19.616611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.616701] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:33.611 [2024-07-24 17:36:19.616755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:33.611 [2024-07-24 17:36:19.616800] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:33.611 [2024-07-24 17:36:19.616820] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:35:33.611 [2024-07-24 17:36:19.616912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:33.611 [2024-07-24 17:36:19.616937] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:33.611 [2024-07-24 17:36:19.616953] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:35:33.611 [2024-07-24 17:36:19.616968] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:33.611 [2024-07-24 17:36:19.616983] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:33.611 [2024-07-24 17:36:19.616995] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:33.611 [2024-07-24 17:36:19.617013] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:33.611 [2024-07-24 17:36:19.617024] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:33.611 [2024-07-24 17:36:19.617036] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:33.611 [2024-07-24 17:36:19.617060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.617078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:33.611 [2024-07-24 17:36:19.617090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:35:33.611 [2024-07-24 17:36:19.617102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.617207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.611 [2024-07-24 17:36:19.617225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:33.611 [2024-07-24 17:36:19.617237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:35:33.611 [2024-07-24 17:36:19.617255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.611 [2024-07-24 17:36:19.617354] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:33.611 [2024-07-24 17:36:19.617373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:33.611 [2024-07-24 17:36:19.617385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:33.611 [2024-07-24 17:36:19.617397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:33.611 [2024-07-24 17:36:19.617420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:33.611 [2024-07-24 17:36:19.617442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:33.611 [2024-07-24 17:36:19.617453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:33.611 [2024-07-24 17:36:19.617463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:33.611 [2024-07-24 17:36:19.617484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:33.611 [2024-07-24 17:36:19.617495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:33.611 [2024-07-24 17:36:19.617518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:35:33.611 [2024-07-24 17:36:19.617529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:33.611 [2024-07-24 17:36:19.617551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:33.611 [2024-07-24 17:36:19.617562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.611 [2024-07-24 17:36:19.617575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:33.611 [2024-07-24 17:36:19.617586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:33.611 [2024-07-24 17:36:19.617597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:33.611 [2024-07-24 17:36:19.617608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:33.611 [2024-07-24 17:36:19.617619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:33.611 [2024-07-24 17:36:19.617647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:33.611 [2024-07-24 17:36:19.617657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:33.611 [2024-07-24 17:36:19.617667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:33.611 [2024-07-24 17:36:19.617692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:33.611 [2024-07-24 17:36:19.617707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:33.611 [2024-07-24 17:36:19.617719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:33.612 [2024-07-24 17:36:19.617730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:33.612 [2024-07-24 17:36:19.617741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:33.612 [2024-07-24 17:36:19.617752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:33.612 [2024-07-24 17:36:19.617762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:33.612 [2024-07-24 17:36:19.617783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:33.612 [2024-07-24 17:36:19.617793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:33.612 [2024-07-24 17:36:19.617814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:33.612 [2024-07-24 17:36:19.617846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:33.612 [2024-07-24 17:36:19.617856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617867] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:33.612 [2024-07-24 17:36:19.617878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:33.612 [2024-07-24 17:36:19.617890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:33.612 [2024-07-24 17:36:19.617903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:35:33.612 [2024-07-24 17:36:19.617915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:33.612 [2024-07-24 17:36:19.617926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:33.612 [2024-07-24 17:36:19.617954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:33.612 [2024-07-24 17:36:19.617966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:33.612 [2024-07-24 17:36:19.617976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:33.612 [2024-07-24 17:36:19.617987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:33.612 [2024-07-24 17:36:19.617999] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:33.612 [2024-07-24 17:36:19.618019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:33.612 [2024-07-24 17:36:19.618045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:33.612 [2024-07-24 17:36:19.618078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:33.612 [2024-07-24 17:36:19.618089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:33.612 [2024-07-24 17:36:19.618100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:33.612 [2024-07-24 17:36:19.618112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:33.612 [2024-07-24 17:36:19.618198] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:35:33.612 [2024-07-24 17:36:19.618211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:33.612 [2024-07-24 17:36:19.618235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:33.612 [2024-07-24 17:36:19.618246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:33.612 [2024-07-24 17:36:19.618257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:33.612 [2024-07-24 17:36:19.618269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.618281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:33.612 [2024-07-24 17:36:19.618293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.971 ms 00:35:33.612 [2024-07-24 17:36:19.618309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.655012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.655302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:33.612 [2024-07-24 17:36:19.655476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.635 ms 00:35:33.612 [2024-07-24 17:36:19.655589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.655719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.655789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:33.612 [2024-07-24 17:36:19.655894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:35:33.612 [2024-07-24 17:36:19.656040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.693434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.693691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:33.612 [2024-07-24 17:36:19.693815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.258 ms 00:35:33.612 [2024-07-24 17:36:19.693870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.694158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.694283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:33.612 [2024-07-24 17:36:19.694417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:33.612 [2024-07-24 17:36:19.694475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.694762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.694836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:33.612 [2024-07-24 17:36:19.694965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:35:33.612 [2024-07-24 17:36:19.695022] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.695218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.695344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:33.612 [2024-07-24 17:36:19.695404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:35:33.612 [2024-07-24 17:36:19.695513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.714888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.715087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:33.612 [2024-07-24 17:36:19.715235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.246 ms 00:35:33.612 [2024-07-24 17:36:19.715438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.715619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.715692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:35:33.612 [2024-07-24 17:36:19.715714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:33.612 [2024-07-24 17:36:19.715729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.758751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.758810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:35:33.612 [2024-07-24 17:36:19.758848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.977 ms 00:35:33.612 [2024-07-24 17:36:19.758863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.612 [2024-07-24 17:36:19.774539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.612 [2024-07-24 17:36:19.774590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:33.612 [2024-07-24 17:36:19.774625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:35:33.612 [2024-07-24 17:36:19.774640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.872420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.872 [2024-07-24 17:36:19.872509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:33.872 [2024-07-24 17:36:19.872547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.620 ms 00:35:33.872 [2024-07-24 17:36:19.872562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.872913] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:35:33.872 [2024-07-24 17:36:19.873089] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:35:33.872 [2024-07-24 17:36:19.873280] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:35:33.872 [2024-07-24 17:36:19.873450] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:35:33.872 [2024-07-24 17:36:19.873475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.872 [2024-07-24 17:36:19.873491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:35:33.872 [2024-07-24 
17:36:19.873515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:35:33.872 [2024-07-24 17:36:19.873529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.873704] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:35:33.872 [2024-07-24 17:36:19.873734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.872 [2024-07-24 17:36:19.873749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:35:33.872 [2024-07-24 17:36:19.873764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:35:33.872 [2024-07-24 17:36:19.873777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.893552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.872 [2024-07-24 17:36:19.893593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:35:33.872 [2024-07-24 17:36:19.893625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.737 ms 00:35:33.872 [2024-07-24 17:36:19.893636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.903624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:33.872 [2024-07-24 17:36:19.903669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:35:33.872 [2024-07-24 17:36:19.903701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:35:33.872 [2024-07-24 17:36:19.903715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:33.872 [2024-07-24 17:36:19.904061] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:35:34.437 [2024-07-24 17:36:20.507280] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:35:34.437 [2024-07-24 17:36:20.507530] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:35:35.003 [2024-07-24 17:36:21.063704] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:35:35.003 [2024-07-24 17:36:21.064102] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:35.003 [2024-07-24 17:36:21.064129] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:35.003 [2024-07-24 17:36:21.064147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.064161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:35:35.003 [2024-07-24 17:36:21.064180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1160.342 ms 00:35:35.003 [2024-07-24 17:36:21.064192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.064242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.064259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:35:35.003 [2024-07-24 17:36:21.064272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:35.003 [2024-07-24 17:36:21.064284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:35:35.003 [2024-07-24 17:36:21.077860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:35.003 [2024-07-24 17:36:21.078038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.078059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:35.003 [2024-07-24 17:36:21.078073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.718 ms 00:35:35.003 [2024-07-24 17:36:21.078085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.078855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.078893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:35:35.003 [2024-07-24 17:36:21.078910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:35:35.003 [2024-07-24 17:36:21.078921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.081502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.081534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:35:35.003 [2024-07-24 17:36:21.081549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.547 ms 00:35:35.003 [2024-07-24 17:36:21.081560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.081647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.081664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:35:35.003 [2024-07-24 17:36:21.081691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:35.003 [2024-07-24 17:36:21.081707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.081841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.081862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:35.003 [2024-07-24 17:36:21.081875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:35:35.003 [2024-07-24 17:36:21.081886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.081916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.081930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:35.003 [2024-07-24 17:36:21.081942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:35.003 [2024-07-24 17:36:21.081952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.081993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:35.003 [2024-07-24 17:36:21.082010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 [2024-07-24 17:36:21.082021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:35.003 [2024-07-24 17:36:21.082037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:35:35.003 [2024-07-24 17:36:21.082047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.082110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:35.003 
[2024-07-24 17:36:21.082125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:35.003 [2024-07-24 17:36:21.082138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:35:35.003 [2024-07-24 17:36:21.082149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:35.003 [2024-07-24 17:36:21.083389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1501.267 ms, result 0 00:35:35.003 [2024-07-24 17:36:21.098690] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.003 [2024-07-24 17:36:21.114646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:35.003 [2024-07-24 17:36:21.124452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:35.003 Validate MD5 checksum, iteration 1 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:35.003 17:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:35.260 [2024-07-24 17:36:21.263711] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:35:35.260 [2024-07-24 17:36:21.264442] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85758 ] 00:35:35.260 [2024-07-24 17:36:21.441193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.518 [2024-07-24 17:36:21.719894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.912  Copying: 513/1024 [MB] (513 MBps) Copying: 1004/1024 [MB] (491 MBps) Copying: 1024/1024 [MB] (average 502 MBps) 00:35:40.912 00:35:40.912 17:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:35:40.912 17:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:43.441 Validate MD5 checksum, iteration 2 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=48d4a5a2e13d07954262a01a22cb0125 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 48d4a5a2e13d07954262a01a22cb0125 != \4\8\d\4\a\5\a\2\e\1\3\d\0\7\9\5\4\2\6\2\a\0\1\a\2\2\c\b\0\1\2\5 ]] 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:43.441 17:36:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:43.441 [2024-07-24 17:36:29.417412] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:35:43.441 [2024-07-24 17:36:29.417591] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85842 ] 00:35:43.441 [2024-07-24 17:36:29.598258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.700 [2024-07-24 17:36:29.868492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.899  Copying: 476/1024 [MB] (476 MBps) Copying: 955/1024 [MB] (479 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:35:47.899 00:35:47.899 17:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:35:47.899 17:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=58e39447f64e72aa0e2f77d82cb71f86 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 58e39447f64e72aa0e2f77d82cb71f86 != \5\8\e\3\9\4\4\7\f\6\4\e\7\2\a\a\0\e\2\f\7\7\d\8\2\c\b\7\1\f\8\6 ]] 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:35:49.827 17:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85721 ]] 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85721 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85721 ']' 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85721 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:49.827 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85721 00:35:50.085 killing process with pid 85721 00:35:50.085 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:50.085 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:50.085 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85721' 00:35:50.085 17:36:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85721 00:35:50.085 17:36:36 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85721 00:35:51.019 [2024-07-24 17:36:37.225967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:51.019 [2024-07-24 17:36:37.246687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.019 [2024-07-24 17:36:37.246768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:51.019 [2024-07-24 17:36:37.246791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:51.019 [2024-07-24 17:36:37.246803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.019 [2024-07-24 17:36:37.246833] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:51.019 [2024-07-24 17:36:37.251265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.019 [2024-07-24 17:36:37.251309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:51.019 [2024-07-24 17:36:37.251325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.411 ms 00:35:51.019 [2024-07-24 17:36:37.251337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.019 [2024-07-24 17:36:37.251629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.019 [2024-07-24 17:36:37.251648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:51.019 [2024-07-24 17:36:37.251661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:35:51.019 [2024-07-24 17:36:37.251672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.019 [2024-07-24 17:36:37.253110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.019 [2024-07-24 17:36:37.253195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:51.019 [2024-07-24 17:36:37.253246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.417 ms 00:35:51.019 [2024-07-24 17:36:37.253271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.019 [2024-07-24 17:36:37.254507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.019 [2024-07-24 17:36:37.254568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:51.019 [2024-07-24 17:36:37.254614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:35:51.019 [2024-07-24 17:36:37.254624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.269171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.269241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:51.278 [2024-07-24 17:36:37.269280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.455 ms 00:35:51.278 [2024-07-24 17:36:37.269291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.277117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.277159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:51.278 [2024-07-24 17:36:37.277176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.771 ms 00:35:51.278 [2024-07-24 17:36:37.277188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.277338] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.277365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:51.278 [2024-07-24 17:36:37.277379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:35:51.278 [2024-07-24 17:36:37.277393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.291493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.291531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:35:51.278 [2024-07-24 17:36:37.291578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.062 ms 00:35:51.278 [2024-07-24 17:36:37.291589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.306096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.306139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:35:51.278 [2024-07-24 17:36:37.306156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.451 ms 00:35:51.278 [2024-07-24 17:36:37.306167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.319725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.319767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:51.278 [2024-07-24 17:36:37.319783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.516 ms 00:35:51.278 [2024-07-24 17:36:37.319794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.332826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.332881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:51.278 [2024-07-24 17:36:37.332898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.957 ms 00:35:51.278 [2024-07-24 17:36:37.332923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.332980] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:51.278 [2024-07-24 17:36:37.333019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:51.278 [2024-07-24 17:36:37.333034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:51.278 [2024-07-24 17:36:37.333046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:51.278 [2024-07-24 17:36:37.333058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:51.278 [2024-07-24 17:36:37.333252] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:51.278 [2024-07-24 17:36:37.333264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e451d736-3d29-4ee9-85e9-d186cc06d64f 00:35:51.278 [2024-07-24 17:36:37.333276] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:51.278 [2024-07-24 17:36:37.333288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:35:51.278 [2024-07-24 17:36:37.333298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:35:51.278 [2024-07-24 17:36:37.333310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:35:51.278 [2024-07-24 17:36:37.333320] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:51.278 [2024-07-24 17:36:37.333331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:51.278 [2024-07-24 17:36:37.333347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:51.278 [2024-07-24 17:36:37.333357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:51.278 [2024-07-24 17:36:37.333367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:51.278 [2024-07-24 17:36:37.333380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.333391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:51.278 [2024-07-24 17:36:37.333404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:35:51.278 [2024-07-24 17:36:37.333436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.353338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.353390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:51.278 [2024-07-24 17:36:37.353422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.840 ms 00:35:51.278 [2024-07-24 17:36:37.353440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.354022] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:35:51.278 [2024-07-24 17:36:37.354049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:51.278 [2024-07-24 17:36:37.354064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:35:51.278 [2024-07-24 17:36:37.354075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.414731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.278 [2024-07-24 17:36:37.414830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:51.278 [2024-07-24 17:36:37.414849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.278 [2024-07-24 17:36:37.414868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.414945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.278 [2024-07-24 17:36:37.414976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:51.278 [2024-07-24 17:36:37.414988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.278 [2024-07-24 17:36:37.414999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.415135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.278 [2024-07-24 17:36:37.415156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:51.278 [2024-07-24 17:36:37.415169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.278 [2024-07-24 17:36:37.415181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.278 [2024-07-24 17:36:37.415213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.278 [2024-07-24 17:36:37.415227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:51.278 [2024-07-24 17:36:37.415239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.278 [2024-07-24 17:36:37.415250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.534951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.535019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:51.537 [2024-07-24 17:36:37.535064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.535085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.635057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:51.537 [2024-07-24 17:36:37.635141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.635286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:51.537 [2024-07-24 17:36:37.635318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 
17:36:37.635402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:51.537 [2024-07-24 17:36:37.635442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.635635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:51.537 [2024-07-24 17:36:37.635666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.635763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:51.537 [2024-07-24 17:36:37.635816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.635904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.635936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:51.537 [2024-07-24 17:36:37.635948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.635960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.636016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:51.537 [2024-07-24 17:36:37.636039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:51.537 [2024-07-24 17:36:37.636052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:51.537 [2024-07-24 17:36:37.636063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:51.537 [2024-07-24 17:36:37.636213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 389.535 ms, result 0 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:52.914 Remove shared memory files 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85501 
00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:35:52.914 ************************************ 00:35:52.914 END TEST ftl_upgrade_shutdown 00:35:52.914 ************************************ 00:35:52.914 00:35:52.914 real 1m35.516s 00:35:52.914 user 2m15.937s 00:35:52.914 sys 0m24.271s 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.914 17:36:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@14 -- # killprocess 77812 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@950 -- # '[' -z 77812 ']' 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@954 -- # kill -0 77812 00:35:52.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77812) - No such process 00:35:52.914 Process with pid 77812 is not found 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 77812 is not found' 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85970 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:52.914 17:36:38 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85970 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@831 -- # '[' -z 85970 ']' 00:35:52.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:52.914 17:36:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:52.914 [2024-07-24 17:36:38.916575] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:35:52.914 [2024-07-24 17:36:38.916836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85970 ] 00:35:52.914 [2024-07-24 17:36:39.088604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.173 [2024-07-24 17:36:39.311440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.109 17:36:40 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.109 17:36:40 ftl -- common/autotest_common.sh@864 -- # return 0 00:35:54.109 17:36:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:54.109 nvme0n1 00:35:54.109 17:36:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:54.109 17:36:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:54.109 17:36:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:54.676 17:36:40 ftl -- ftl/common.sh@28 -- # stores=24a70465-9831-4525-909a-e7881c381e9e 00:35:54.676 17:36:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:54.676 17:36:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24a70465-9831-4525-909a-e7881c381e9e 00:35:54.676 17:36:40 ftl -- ftl/ftl.sh@23 -- # killprocess 85970 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@950 -- # '[' -z 85970 ']' 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@954 -- # kill -0 85970 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@955 -- # uname 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85970 00:35:54.676 killing process with pid 85970 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85970' 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@969 -- # kill 85970 00:35:54.676 17:36:40 ftl -- common/autotest_common.sh@974 -- # wait 85970 00:35:56.578 17:36:42 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:56.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:56.836 Waiting for block devices as requested 00:35:56.836 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.095 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.095 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.095 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:02.356 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:02.356 17:36:48 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:02.356 Remove shared memory files 00:36:02.356 17:36:48 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:02.356 17:36:48 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:02.356 17:36:48 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:02.356 17:36:48 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:02.356 17:36:48 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:02.356 17:36:48 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:02.356 
************************************ 00:36:02.356 END TEST ftl 00:36:02.356 ************************************ 00:36:02.356 00:36:02.356 real 12m24.583s 00:36:02.356 user 15m31.543s 00:36:02.356 sys 1m32.886s 00:36:02.356 17:36:48 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.356 17:36:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:02.356 17:36:48 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:02.356 17:36:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:02.356 17:36:48 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:02.356 17:36:48 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:36:02.356 17:36:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:02.356 17:36:48 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:02.356 17:36:48 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:02.356 17:36:48 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:36:02.356 17:36:48 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:36:02.356 17:36:48 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:36:02.356 17:36:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.356 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:36:02.356 17:36:48 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:36:02.356 17:36:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:02.356 17:36:48 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:02.356 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:36:03.286 INFO: APP EXITING 00:36:03.286 INFO: killing all VMs 00:36:03.286 INFO: killing vhost app 00:36:03.286 INFO: EXIT DONE 00:36:03.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:03.801 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:03.801 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:03.801 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:03.801 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:04.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:04.625 Cleaning 00:36:04.625 Removing: /var/run/dpdk/spdk0/config 00:36:04.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:04.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:04.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:04.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:04.625 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:04.625 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:04.625 Removing: /var/run/dpdk/spdk0 00:36:04.625 Removing: /var/run/dpdk/spdk_pid61778 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62000 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62221 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62325 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62375 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62509 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62527 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62707 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62811 00:36:04.625 Removing: /var/run/dpdk/spdk_pid62910 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63024 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63118 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63158 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63200 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63268 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63374 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63834 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63905 
00:36:04.625 Removing: /var/run/dpdk/spdk_pid63973 00:36:04.625 Removing: /var/run/dpdk/spdk_pid63995 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64144 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64160 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64308 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64324 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64388 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64412 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64476 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64499 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64683 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64725 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64806 00:36:04.625 Removing: /var/run/dpdk/spdk_pid64973 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65068 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65110 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65581 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65679 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65799 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65852 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65889 00:36:04.625 Removing: /var/run/dpdk/spdk_pid65965 00:36:04.625 Removing: /var/run/dpdk/spdk_pid66597 00:36:04.625 Removing: /var/run/dpdk/spdk_pid66644 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67152 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67256 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67381 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67434 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67461 00:36:04.625 Removing: /var/run/dpdk/spdk_pid67491 00:36:04.625 Removing: /var/run/dpdk/spdk_pid69369 00:36:04.625 Removing: /var/run/dpdk/spdk_pid69517 00:36:04.625 Removing: /var/run/dpdk/spdk_pid69521 00:36:04.625 Removing: /var/run/dpdk/spdk_pid69533 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69581 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69585 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69597 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69646 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69651 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69663 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69708 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69712 00:36:04.885 Removing: /var/run/dpdk/spdk_pid69724 00:36:04.885 Removing: /var/run/dpdk/spdk_pid71078 00:36:04.885 Removing: /var/run/dpdk/spdk_pid71183 00:36:04.885 Removing: /var/run/dpdk/spdk_pid72591 00:36:04.885 Removing: /var/run/dpdk/spdk_pid73937 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74063 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74188 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74303 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74454 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74528 00:36:04.885 Removing: /var/run/dpdk/spdk_pid74668 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75041 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75083 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75550 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75741 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75840 00:36:04.885 Removing: /var/run/dpdk/spdk_pid75957 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76016 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76047 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76340 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76406 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76485 00:36:04.885 Removing: /var/run/dpdk/spdk_pid76874 00:36:04.885 Removing: /var/run/dpdk/spdk_pid77024 00:36:04.885 Removing: /var/run/dpdk/spdk_pid77812 00:36:04.885 Removing: /var/run/dpdk/spdk_pid77946 00:36:04.885 Removing: /var/run/dpdk/spdk_pid78143 00:36:04.885 Removing: 
/var/run/dpdk/spdk_pid78251 00:36:04.885 Removing: /var/run/dpdk/spdk_pid78632 00:36:04.885 Removing: /var/run/dpdk/spdk_pid78907 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79264 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79459 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79606 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79670 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79825 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79859 00:36:04.885 Removing: /var/run/dpdk/spdk_pid79921 00:36:04.885 Removing: /var/run/dpdk/spdk_pid80133 00:36:04.885 Removing: /var/run/dpdk/spdk_pid80359 00:36:04.885 Removing: /var/run/dpdk/spdk_pid80817 00:36:04.885 Removing: /var/run/dpdk/spdk_pid81301 00:36:04.885 Removing: /var/run/dpdk/spdk_pid81780 00:36:04.885 Removing: /var/run/dpdk/spdk_pid82334 00:36:04.885 Removing: /var/run/dpdk/spdk_pid82488 00:36:04.885 Removing: /var/run/dpdk/spdk_pid82591 00:36:04.885 Removing: /var/run/dpdk/spdk_pid83363 00:36:04.885 Removing: /var/run/dpdk/spdk_pid83444 00:36:04.885 Removing: /var/run/dpdk/spdk_pid83925 00:36:04.885 Removing: /var/run/dpdk/spdk_pid84361 00:36:04.885 Removing: /var/run/dpdk/spdk_pid84897 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85014 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85071 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85142 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85204 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85278 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85501 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85574 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85648 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85721 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85758 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85842 00:36:04.885 Removing: /var/run/dpdk/spdk_pid85970 00:36:04.885 Clean 00:36:04.885 17:36:51 -- common/autotest_common.sh@1451 -- # return 0 00:36:04.885 17:36:51 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:36:04.885 17:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.885 17:36:51 -- common/autotest_common.sh@10 -- # set +x 00:36:05.143 17:36:51 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:36:05.143 17:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.143 17:36:51 -- common/autotest_common.sh@10 -- # set +x 00:36:05.143 17:36:51 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:05.143 17:36:51 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:05.143 17:36:51 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:05.143 17:36:51 -- spdk/autotest.sh@395 -- # hash lcov 00:36:05.143 17:36:51 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:05.143 17:36:51 -- spdk/autotest.sh@397 -- # hostname 00:36:05.143 17:36:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:05.401 geninfo: WARNING: invalid characters removed from testname! 
00:36:31.941 17:37:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:31.941 17:37:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:34.508 17:37:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:37.035 17:37:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:39.562 17:37:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:42.850 17:37:28 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:44.751 17:37:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:44.751 17:37:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:44.751 17:37:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:44.751 17:37:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.751 17:37:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.751 17:37:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.751 17:37:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.751 17:37:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.751 17:37:30 -- paths/export.sh@5 -- $ export PATH 00:36:44.751 17:37:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.751 17:37:30 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:44.751 17:37:30 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:44.751 17:37:30 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842650.XXXXXX 00:36:44.752 17:37:30 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842650.yKoeuF 00:36:44.752 17:37:30 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:44.752 17:37:30 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:36:44.752 17:37:30 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:36:44.752 17:37:30 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:44.752 17:37:30 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:44.752 17:37:30 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:44.752 17:37:30 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:44.752 17:37:30 -- common/autotest_common.sh@10 -- $ set +x 00:36:44.752 17:37:30 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:36:44.752 17:37:30 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:44.752 17:37:30 -- pm/common@17 -- $ local monitor 00:36:44.752 17:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:44.752 17:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:44.752 17:37:30 -- pm/common@25 -- $ sleep 1 00:36:44.752 17:37:30 -- pm/common@21 -- $ date +%s 00:36:44.752 17:37:30 -- pm/common@21 -- $ date +%s 00:36:44.752 17:37:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721842650 00:36:44.752 17:37:30 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721842650 00:36:44.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721842650_collect-vmstat.pm.log 00:36:44.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721842650_collect-cpu-load.pm.log 00:36:45.708 17:37:31 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:45.708 17:37:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:36:45.708 17:37:31 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:36:45.708 17:37:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:45.708 17:37:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:45.708 17:37:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:45.708 17:37:31 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:45.708 17:37:31 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:45.708 17:37:31 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:45.966 17:37:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:45.966 17:37:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:45.966 17:37:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:45.966 17:37:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:45.967 17:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.967 17:37:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:45.967 17:37:31 -- pm/common@44 -- $ pid=87644 00:36:45.967 17:37:31 -- pm/common@50 -- $ kill -TERM 87644 00:36:45.967 17:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:45.967 17:37:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:45.967 17:37:31 -- pm/common@44 -- $ pid=87646 00:36:45.967 17:37:31 -- pm/common@50 -- $ kill -TERM 87646 00:36:45.967 + [[ -n 5189 ]] 00:36:45.967 + sudo kill 5189 00:36:45.977 [Pipeline] } 00:36:45.998 [Pipeline] // timeout 00:36:46.004 [Pipeline] } 00:36:46.021 [Pipeline] // stage 00:36:46.027 [Pipeline] } 00:36:46.044 [Pipeline] // catchError 00:36:46.054 [Pipeline] stage 00:36:46.057 [Pipeline] { (Stop VM) 00:36:46.072 [Pipeline] sh 00:36:46.351 + vagrant halt 00:36:49.643 ==> default: Halting domain... 00:36:56.234 [Pipeline] sh 00:36:56.513 + vagrant destroy -f 00:36:59.796 ==> default: Removing domain... 
00:37:00.066 [Pipeline] sh 00:37:00.347 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:37:00.356 [Pipeline] } 00:37:00.375 [Pipeline] // stage 00:37:00.382 [Pipeline] } 00:37:00.399 [Pipeline] // dir 00:37:00.405 [Pipeline] } 00:37:00.426 [Pipeline] // wrap 00:37:00.432 [Pipeline] } 00:37:00.443 [Pipeline] // catchError 00:37:00.453 [Pipeline] stage 00:37:00.455 [Pipeline] { (Epilogue) 00:37:00.470 [Pipeline] sh 00:37:00.751 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:07.345 [Pipeline] catchError 00:37:07.347 [Pipeline] { 00:37:07.357 [Pipeline] sh 00:37:07.629 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:07.887 Artifacts sizes are good 00:37:07.897 [Pipeline] } 00:37:07.914 [Pipeline] // catchError 00:37:07.925 [Pipeline] archiveArtifacts 00:37:07.931 Archiving artifacts 00:37:08.071 [Pipeline] cleanWs 00:37:08.084 [WS-CLEANUP] Deleting project workspace... 00:37:08.084 [WS-CLEANUP] Deferred wipeout is used... 00:37:08.090 [WS-CLEANUP] done 00:37:08.092 [Pipeline] } 00:37:08.112 [Pipeline] // stage 00:37:08.118 [Pipeline] } 00:37:08.134 [Pipeline] // node 00:37:08.140 [Pipeline] End of Pipeline 00:37:08.181 Finished: SUCCESS